Perl 6 - the future is here, just unevenly distributed

IRC log for #perl6-dev, 2017-10-25


All times shown according to UTC.

Time Nick Message
00:00 Zoffix timotimo: well I don't know how to parse out the relevant bits
00:03 MasterDuke timotimo: in https://stackoverflow.com/questions/46867216/regex-speed-in-perl-6, his second version where he loops over his tokens could be made a lot faster by changing `for @search -> $token {` to `for @search -> Str $token {` and then removing the '{}' in the grep
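A sketch of the change MasterDuke describes, on a hypothetical token list (the variable names and input here are assumptions, not the code from the SO post):

```perl6
# Hypothetical data; the real tokens and input come from the SO question.
my @search = <foo bar baz>;
my @lines  = 'input.txt'.IO.lines;

# Slower: untyped parameter, plus an extra block wrapping the grep test
for @search -> $token {
    my @hits = @lines.grep({ .contains($token) });
}

# Faster: the Str type annotation lets the optimizer specialize the loop,
# and passing the test without the surrounding '{}' avoids an extra
# block invocation per element
for @search -> Str $token {
    my @hits = @lines.grep(*.contains($token));
}
```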
00:04 MasterDuke i don't have an SO account
00:12 Zoffix AlexDaniel`: I don't get how to use your golfed server... It seems everything just quits right away and no tests get run
00:13 Zoffix AlexDaniel`: with this: https://gist.github.com/zoffixznet/88d58e5789b568a45b086b1499fdc689
00:17 Zoffix AlexDaniel`: nm, was missing `use` for Client
00:20 Zoffix The one thing I'm noticing changing `whenever $sock.Supply -> $got {` to `$sock.Supply.act: -> $got {` kinda makes the behaviour the same (broken on both 2017.09 and HEAD)... So are the docs wrong that `whenever` is like calling .act? https://docs.perl6.org/language/concurrency#index-entry-whenever
00:20 Zoffix s: &WHENEVER
00:20 SourceBaby Zoffix, Sauce is at https://github.com/rakudo/rakudo/blob/eb1febd56/src/core/Supply.pm#L2009
00:20 Zoffix ahh
00:21 ugexe i thought react/whenever makes sure everything gets initiated properly before you start it
00:21 Zoffix ahhh
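The two forms Zoffix is comparing, sketched out (assuming `$sock` is an async socket connection as in the golfed server; this is not the actual golfed code):

```perl6
# `whenever` taps the supply inside a react block...
react {
    whenever $sock.Supply -> $got {
        say "got: $got";
    }
}

# ...which the docs liken to a bare .act tap:
$sock.Supply.act: -> $got {
    say "got: $got";
}

# Per ugexe's point: react/whenever also ensures all whenever taps
# are fully set up before anything starts flowing, which a lone
# .act call does not guarantee.
```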
00:22 timotimo MasterDuke: do you have a factor of speed improvement? or fraction of time spent?
00:23 timotimo gotta go to sleep before i report on it though
00:23 timotimo gnite!
00:23 MasterDuke timotimo: on my machine went from 1.4s to .5s
00:23 Zoffix \o
00:23 MasterDuke later...
00:24 timotimo so subtracting startup it might have been 1.3 to 0.4 so like 3.3x faster?
00:24 timotimo anyway, laters
00:25 MasterDuke probably about that yeah
00:52 llfourn joined #perl6-dev
01:08 BenGoldberg joined #perl6-dev
01:21 travis-ci joined #perl6-dev
01:21 travis-ci Rakudo build failed. Zoffix Znet 'Bump NQP'
01:21 travis-ci https://travis-ci.org/rakudo/rakudo/builds/292329779 https://github.com/rakudo/rakudo/compare/b19e352eca61...eb1febd56583
01:21 travis-ci left #perl6-dev
01:21 buggable [travis build above] ✓ All failures are due to: GitHub connectivity (1 failure).
01:40 llfourn joined #perl6-dev
01:57 ilbot3 joined #perl6-dev
01:57 Topic for #perl6-dev is now Perl 6 language and compiler development 2.0 | Logs at http://irclog.perlgeek.de/perl6-dev/today | For toolchain/installation stuff see #perl6-toolchain | For MoarVM see #moarvm
02:10 * Zoffix has successfully extracted WebSocket module from the buggy code
02:11 Zoffix ZofBot: on to HTTP::Tiny::Server-ectomy!
02:11 ZofBot Zoffix, timotimo++ :-)
02:11 Zoffix hah
02:42 Zoffix All modules out. 23-line golf now \o/
02:58 Zoffix 3 lines + 2 lines of a sleep and a brace.
02:58 Zoffix Good enough to debug it now...
03:04 Zoffix man, would be nice to eat this bug....
03:04 Zoffix ZofBot: I bet it's delishus
03:04 ZofBot Zoffix, He pushed it away from him, aware only that it was the memory of some action which he would have liked to undo but could not
03:06 ZofBot joined #perl6-dev
03:06 Zoffix ZofBot: how's your brain now?
03:06 ZofBot Zoffix, _I am just off to the theatre
03:06 Zoffix OK then
03:07 ZofBot joined #perl6-dev
03:29 Zoffix holy shit. I fixed it!
03:29 Zoffix Now, onto figure out why :)
04:02 Zoffix So "supervisor" doesn't watch the affinity workers, does it?
04:03 Geth ¦ rakudo/nom: 176a6fae07 | (Zoffix Znet)++ | src/core/ThreadPoolScheduler.pm
04:03 Geth ¦ rakudo/nom: Fix incorrect queue size measurement
04:03 Geth ¦ rakudo/nom:
04:03 Geth ¦ rakudo/nom: We're mistakenly calling .elems on the AffinityWorker, which will
04:03 Geth ¦ rakudo/nom: just return 1, messing up our measures of which worker is less busy.
04:03 Geth ¦ rakudo/nom:
04:03 Geth ¦ rakudo/nom: Use its .queue instead; we already grabbed it into a var a few lines up.
04:03 Geth ¦ rakudo/nom: review: https://github.com/rakudo/rakudo/commit/176a6fae07
04:04 Zoffix (that's not THE bug fix; just something else I spotted)
04:15 Zoffix Well, I think imma give up and add a cheatsy fix and a test and jnthn++ can then check it out and see how to make it sane
04:25 Geth ¦ roast: 74445ddf8a | (Zoffix Znet)++ | MISC/bug-coverage-stress.t
04:25 Geth ¦ roast: Add test for hang in supply in a sock
04:25 Geth ¦ roast:
04:25 Geth ¦ roast: https://github.com/tokuhirom/p6-WebSocket/issues/15#issuecomment-339120879
04:25 Geth ¦ roast: RT #132343
04:25 Geth ¦ roast: review: https://github.com/perl6/roast/commit/74445ddf8a
04:25 synopsebot RT#132343 [new]: https://rt.perl.org/Ticket/Display.html?id=132343 [REGRESSION] better-sched and other async improvement ecosystem fallout
04:25 Zoffix ^ That's the golf of the issue
04:27 AlexDaniel` Zoffix: awesome!!
04:28 Zoffix stresstesting a fix that fixes it ATM. But it's just a hack that adds another worker
04:30 AlexDaniel` actually that sounds about right?
04:31 AlexDaniel` anyway, I'm leaving for ≈8 hours, see you later
04:31 AlexDaniel` o/
04:31 Zoffix Thinking more of it, it might not even fix the module, just the test.
04:31 Zoffix Well. I'm guessing jnthn will come back tomorrow and it'll give him an idea of what's to fix :)
04:32 Zoffix ZOFVM: Files=1283, Tests=152774, 160 wallclock secs (22.04 usr  3.93 sys + 3415.68 cusr 203.89 csys = 3645.54 CPU)
04:32 AlexDaniel` ZofBot: death to timezones!
04:32 ZofBot AlexDaniel`, The change in the mater is marvellous
04:33 Geth ¦ rakudo/nom: ce7e5444a2 | (Zoffix Znet)++ | src/core/ThreadPoolScheduler.pm
04:33 Geth ¦ rakudo/nom: Add hackish fix deadlock for supply in a sock
04:33 Geth ¦ rakudo/nom:
04:33 Geth ¦ rakudo/nom: https://github.com/tokuhirom/p6-WebSocket/issues/15#issuecomment-339120879
04:33 Geth ¦ rakudo/nom: RT #132343
04:33 Geth ¦ rakudo/nom: Test: https://github.com/perl6/roast/commit/74445ddf8a
04:33 Geth ¦ rakudo/nom:
04:33 Geth ¦ rakudo/nom: Just pop in another worker in a case where we have just one and
04:33 Geth ¦ rakudo/nom: whose queue is empty. This fixes the bug demonstrated by the test,
04:34 Geth ¦ rakudo/nom: but it doesn't address the core cause as I don't understand it.
04:34 Geth ¦ rakudo/nom:
04:34 Geth ¦ rakudo/nom: Need proper fixin'.
04:34 Geth ¦ rakudo/nom: review: https://github.com/rakudo/rakudo/commit/ce7e5444a2
04:34 AlexDaniel` Zoffix++ great progress
04:34 * AlexDaniel` &
04:37 Zoffix Yup. Just as I figured. It fixes the test but not the module -_-
04:38 Zoffix dammit
04:39 Zoffix Well, the other fix would be to remove :hint-affinity in .queue methods in Async sock and make it use regular queue
04:41 Zoffix .tell jnthn I golfed AlexDaniel`'s issue into a test: https://github.com/perl6/roast/commit/74445ddf8a and committed a hack that makes the test pass: https://github.com/rakudo/rakudo/commit/ce7e5444a2  but WebSocket module still fails its tests because I'm guessing it got more than one AffinityWorker active so the deadlock still happens and isn't fixed by my hack. No idea why the deadlock actually occurs
04:41 yoleaux Zoffix: I'll pass your message to jnthn.
04:41 Zoffix so dunno how to fix. My hack needs to be reverted
04:42 Zoffix .tell jnthn so dunno how to fix. My hack needs to be reverted
04:42 yoleaux Zoffix: I'll pass your message to jnthn.
04:42 * Zoffix drops to bed
04:51 AlexDaniel` I think there's only one affinity worker there
05:13 travis-ci joined #perl6-dev
05:13 travis-ci Rakudo build passed. Zoffix Znet 'Fix incorrect queue size measurement
05:13 travis-ci https://travis-ci.org/rakudo/rakudo/builds/292427250 https://github.com/rakudo/rakudo/compare/eb1febd56583...176a6fae076a
05:13 travis-ci left #perl6-dev
05:45 AlexDaniel` joined #perl6-dev
05:49 AlexDaniel` joined #perl6-dev
05:58 travis-ci joined #perl6-dev
05:58 travis-ci Rakudo build passed. Zoffix Znet 'Add hackish fix deadlock for supply in a sock
05:58 travis-ci https://travis-ci.org/rakudo/rakudo/builds/292434436 https://github.com/rakudo/rakudo/compare/176a6fae076a...ce7e5444a2c4
05:58 travis-ci left #perl6-dev
06:55 wander joined #perl6-dev
07:36 [Tux] test-t           3.417 -  3.631 (/me runs it again)
07:39 [Tux] This is Rakudo version 2017.09-503-gce7e5444a built on MoarVM version 2017.09.1-622-g6e9e89ee
07:39 [Tux] csv-ip5xs        1.183 -  1.200
07:39 [Tux] test            11.841 - 11.852
07:39 [Tux] test-t           3.110 -  3.152
07:39 [Tux] csv-parser      12.454 - 13.021
07:39 AlexDaniel` joined #perl6-dev
07:54 [Tux] http://tux.nl/Files/20171025095340.png ← that was in my spam folder. perl5 or perl6? Spam or not? (I'm not going to answer that)
08:04 JimmyZ joined #perl6-dev
08:04 JimmyZ [Tux]: I confirmed it, it's not a spam
08:06 [Tux] so, should I (try to) answer that?
07:39 [Tux] FWIW I am not a mac user, so anyone else could do a better job than my saying "zef install Text::CSV" of "cpan "Text::CSV"
08:07 JimmyZ [Tux]: yeah, he is a developer
08:07 [Tux] s/" o\Kf/r/
08:08 [Tux] sent
08:09 JimmyZ I don't know about he too, just confirmed it from qq.com :p, and thanks
08:10 * [Tux] commutes ...
09:04 Zoffix AlexDaniel`: like in total? Nah. I can repro the bug again in my test if I just fire off a few listen socks before the original test: https://gist.github.com/zoffixznet/2b2364005f15dfc887b0226a1a199536
09:05 Zoffix And yeah, it bails out exactly where I expected it would, in the loop before my hack
09:09 SourceBaby joined #perl6-dev
09:10 Zoffix oh sweet. found another piece of a puzzle
09:32 Zoffix starting to understand the bug
09:32 Zoffix ... get in its head. Oh yeah...
09:56 pmurias joined #perl6-dev
09:59 lizmat Zoffix: looks like ce7e5444a2c4aa69c2e has a severe performance effect on hyper: test-t --hyper from 1.16 -> 1.47 seconds  :-(
10:00 lizmat argh, sorry for the noise
10:00 lizmat I also had a 1202 running :-(
10:00 Zoffix :)
10:01 Zoffix FWIW R#1209 has some failures without pack/unpack involved, but I didn't quite understand if it were just the tests or if it were meant to work
10:01 synopsebot R#1209 [open]: https://github.com/rakudo/rakudo/issues/1209 Most Blob/Buf subtypes are specced, documented, broken and unusable
10:02 lizmat I already replied to that: I'd rather see pack/unpack deprecated -> removed and PackUnpack distro worked on
10:02 Zoffix yeah
10:04 samcv what is the issue with pack/unpack? is there some other alternative for working with binary data? as i know it we have tons of functions for worknig with strings but none really for extracting stuff from binary data
10:05 samcv or is it just that it's sufficiently complex? maybe we need some simple things that can do parts of what pack does without tons of overhead?
10:05 samcv so you can at least extract parts of a buffer as certain types of data even if we don't have the full template type thing that pack/unpack has hmm
10:06 Zoffix I only remember someone having an experimental impl of it ages ago (2015?) and no one caring about pack/unpack really. Seems only a handful of Perl 5 users kinda expect that feature to be a given ¯\_(ツ)_/¯
10:07 * DrForr was considering Spreadsheet::WriteExcel, IIRC it needs pack()/unpack() pretty extensively...
10:07 DrForr (just checked the tests...
10:07 DrForr )
10:08 samcv i might be willing to work on pack/unpack since i think it's important, even if it doesn't seem that important, if we don't have it then it would be a gap in perl6 imo
10:08 lizmat my "dream" is still a combination of a special encoding and syntactic sugar that would allow you to use the regular expression engine on binary data
10:09 samcv that would be great
10:09 lizmat all we really need I think, is a way to encode each byte value to a synthetic
10:09 lizmat and a way to specify that synthetic in a regular expression
10:09 samcv is that efficient?
10:10 samcv if you had a lot of data
10:10 DrForr samcv: Agreed, it's needed, especially important for Excel and friends, which for better or for worse still runs a lot of businesses.
10:10 samcv i remember when i was working with Buf the main thing sticking out was, there aren't many ops to work with them
10:10 * lizmat was hoping [Tux] would be sufficiently inspired to take over PackUnpack  :-)
10:10 samcv you can create buf's but... using them..
10:11 |Tux| if I was jobless and motivated enough, sure :P
10:11 lizmat hehe  :-)
10:12 samcv is pack/unpack in perl6 supposed to basically be a copy of perl5's functionality?
10:12 lizmat well, that's the question
10:12 lizmat pack/unpack comes from a completely untyped world
10:13 lizmat it always felt like a poor fit for the more typed Perl 6 world to me
10:16 samcv that could very well be the case. though i do think there needs to be some way to work with Buf's hmm
10:17 DrForr I think too it's more akin to "Here's a C data structure serialized, really it should be a C library but too much work."
10:19 |Tux| DrForr - for *me* pack/unpack is a very convenient way to store  data in hashes and hash *KEY*s to be able to sort on combined keys and efficiently and FAST pass data around perl5 process-flows
10:19 |Tux| serialization is not my first goal
10:20 |Tux| «pack "s>s>s>", 101, 25, 101;» is something I use hundreds of times throughout my perl5 apps
10:22 |Tux| a second reason is that on old machines with not enough memory, packed strings are way more efficient in large hashes than anonymous lists of the same content
10:26 DrForr Okay, that's fair enough. Would a library (like Liz's suggestion) still meet performance goals? Just thinking about being able to test different implementations...
10:26 |Tux| For the systems I need that for, perl6 is not an option :)
10:27 |Tux| I bet it won't even build on an old HP-UX 11.11 with just 1 Gb of RAM
10:27 lizmat DrForr: there is such a library already: http://modules.perl6.org/search/?q=PackUnpack
10:27 DrForr The old machines, certainly I can see that not being an option.
10:27 Geth ¦ nqp/master: 6 commits pushed by pmurias++
10:27 Geth ¦ nqp/master: 79b7ae0509 | [js] Handle EDGE_CHARRANGE_M when building NFAs
10:27 Geth ¦ nqp/master: 7e7e5768a9 | Fix test description
10:27 Geth ¦ nqp/master: 1c4e1ef7c3 | [js] Make the exceptions stack per fiber
10:27 Geth ¦ nqp/master: 6de6e3a609 | [js] Fix bugs when using default list_i values
10:27 Geth ¦ nqp/master: b81bed8b3a | Test using indexes from a list_i with default values
10:27 Geth ¦ nqp/master: 0418b0b0b4 | [js] Implement nqp::multidimref_*
10:27 Geth ¦ nqp/master: review: https://github.com/perl6/nqp/compare/8fa082b269...0418b0b0b4
10:27 lizmat [Tux]: I think libuv doesn't live on HP-UX, so that's a nono to start with
10:29 lizmat Zoffix: another datapoint on the #1202 saga: I can't get the code to crash if I change the say to a print
10:29 AlexDaniel` GH#1202
10:29 synopsebot GH#1202 [open]: https://github.com/rakudo/rakudo/issues/1202 [severe] Async qqx sometimes hangs or dies ( await (^5).map({start { say qqx{… …} } }) )
10:30 lizmat I changed it to  print qqx{ ... }.chomp
10:30 lizmat so it would take up less screen space
10:32 lizmat ahhh.. got it to crash after 7 minutes
10:32 lizmat while it was running a spectest at the same time
10:34 lizmat ok, now after 43 secs
10:34 lizmat seems I only generate noise today  :(
10:35 * Zoffix wasn't following 1202
10:45 * Zoffix backlogs #perl6
10:46 Zoffix I'm with moritz... perl++ is an awful name and despite the OP of the blog post quoting two of my articles feels like they missed the point of why I wanted to rename :/
10:47 Zoffix For the moment I'm behind TimToady's "psix with p silent if you want to" name. It seemed to have a bunch of support from people (even more if you include those who like "P6" variant) and I like the somewhat secret (at least it was to me) reference to some literature or whatever it was to :)
10:51 gfldex I wonder if we should add a non-parallel version of ». before adding autothreading. With >. one could fix fallout from the change easily.
10:51 Zoffix huggable: psix :is: psix is reference to https://en.wikipedia.org/wiki/Psmith
10:51 huggable Zoffix, Added psix as psix is reference to https://en.wikipedia.org/wiki/Psmith
10:52 lizmat update on GH #1202: if I put a print before the await, it doesn't seem to crash (running more than 10 minutes now)
10:52 synopsebot GH#1202 [open]: https://github.com/rakudo/rakudo/issues/1202 [severe] Async qqx sometimes hangs or dies ( await (^5).map({start { say qqx{… …} } }) )
10:52 lizmat which would indicate a race condition on the encoder...
10:54 lizmat *initialization of
10:59 Zoffix and update on RT#132343: bug seems something to do with $*AWAITER. The changes to it are commits https://github.com/rakudo/rakudo/commit/547839200a772e26ea164e9d1fd8c9cd4a5c2d9f and https://github.com/rakudo/rakudo/commit/26a9c313297a21c11ac30f02349497822686f507 that mention queuing and deadlocking... seems like exactly what's happening now and I'm guessing the new mechanism queues up the stuff to run
10:59 Zoffix afterwards but "afterwards" doesn't happen or something. Like this code hangs https://gist.github.com/zoffixznet/4254b946539b6e2e6431f84957c2835a but if I shove the .list of supply {} into a separate promise, it works fine: https://gist.github.com/zoffixznet/59f4b46d361511efff1a74b373fff6fa
10:59 synopsebot RT#132343 [open]: https://rt.perl.org/Ticket/Display.html?id=132343 [REGRESSION] better-sched and other async improvement ecosystem fallout
10:59 * Zoffix gives up on it for now.
11:09 Geth ¦ rakudo/nom: 794235a381 | (Zoffix Znet)++ | src/core/ThreadPoolScheduler.pm
11:09 Geth ¦ rakudo/nom: Revert "Add hackish fix deadlock for supply in a sock"
11:09 Geth ¦ rakudo/nom:
11:09 Geth ¦ rakudo/nom: This reverts commit ce7e5444a2c4aa69c2e4421f02a287241199318e.
11:09 Geth ¦ rakudo/nom:
11:09 Geth ¦ rakudo/nom: This commit is pointless and doesn't fix the bug in real-life code.
11:09 Geth ¦ rakudo/nom: review: https://github.com/rakudo/rakudo/commit/794235a381
11:09 lizmat :-(
11:11 Geth ¦ roast: 45dcf9cd65 | (Zoffix Znet)++ | MISC/bug-coverage-stress.t
11:11 Geth ¦ roast: TODO-fudge supply-in-a-sock deadlock test
11:11 Geth ¦ roast:
11:11 Geth ¦ roast: Also fire off a few more socks to cover the deadlock that
11:11 Geth ¦ roast: https://github.com/rakudo/rakudo/commit/794235a381 did not fix
11:11 Geth ¦ roast: review: https://github.com/perl6/roast/commit/45dcf9cd65
11:18 Geth ¦ roast: 7df4b4c4dd | (Zoffix Znet)++ | packages/Test/Util.pm
11:18 Geth ¦ roast: Prevent generation of `typescript` file by run-with-tty test
11:18 Geth ¦ roast:
11:18 Geth ¦ roast: It's made by the `script` command and looks like there's
11:18 Geth ¦ roast: a difference between Bodhi Linux and Debian `script` impls
11:18 Geth ¦ roast: as to how the filename for that needs to be specified
11:18 Geth ¦ roast: review: https://github.com/perl6/roast/commit/7df4b4c4dd
11:19 Zoffix Considering Bodhi Linux is a fork of Ubuntu that's a fork of Debian, makes me wonder how flimsy that test really is :/
11:38 Geth ¦ rakudo/affinity-worker-workaround: 418dbbd8a3 | (Zoffix Znet)++ | src/core/IO/Socket/Async.pm
11:38 Geth ¦ rakudo/affinity-worker-workaround: Use general queue in async sock
11:38 Geth ¦ rakudo/affinity-worker-workaround:
11:38 Geth ¦ rakudo/affinity-worker-workaround: Workaround for RT#132343 and
11:38 Geth ¦ rakudo/affinity-worker-workaround: https://github.com/tokuhirom/p6-WebSocket/issues/15#issuecomment-339120879
11:38 Geth ¦ rakudo/affinity-worker-workaround: that simply bypasses the affinity queue and uses the general one.
11:38 Geth ¦ rakudo/affinity-worker-workaround:
11:38 Geth ¦ rakudo/affinity-worker-workaround: This like undos the benefits mentioned in
11:38 synopsebot RT#132343 [open]: https://rt.perl.org/Ticket/Display.html?id=132343 [REGRESSION] better-sched and other async improvement ecosystem fallout
11:38 Geth ¦ rakudo/affinity-worker-workaround: <…commit message has 5 more lines…>
11:38 Geth ¦ rakudo/affinity-worker-workaround: review: https://github.com/rakudo/rakudo/commit/418dbbd8a3
11:39 Zoffix .tell AlexDaniel this branch fixes the bug in WebSocket but it does it by just avoiding the affinity queue: https://github.com/rakudo/rakudo/commit/418dbbd8a3  I guess that's fine if a release has to be made ¯\_(ツ)_/¯
11:39 yoleaux Zoffix: I'll pass your message to AlexDaniel.
11:40 Zoffix .tell jnthn reverted my hack and hardened the test to cover the stuff my hack didn't fix. A bit more debug info on the bug here: https://irclog.perlgeek.de/perl6-dev/2017-10-25#i_15351230
11:40 yoleaux Zoffix: I'll pass your message to jnthn.
11:40 * Zoffix &
11:47 Geth ¦ nqp: 3fbc06669b | pmurias++ | src/vm/jvm/runtime/org/perl6/nqp/runtime/Ops.java
11:47 Geth ¦ nqp: [jvm] Implement nqp::ordbaseat
11:47 Geth ¦ nqp: review: https://github.com/perl6/nqp/commit/3fbc06669b
11:49 Geth ¦ nqp: 4c2ffcb144 | pmurias++ | 6 files
11:49 Geth ¦ nqp: [jvm] Implement nqp::multidimref_* ops
11:49 Geth ¦ nqp: review: https://github.com/perl6/nqp/commit/4c2ffcb144
11:51 Geth ¦ nqp: 4e2d7c7bfa | pmurias++ | t/moar/07-eqatic.t
11:51 Geth ¦ nqp: Fix typo in test description
11:51 Geth ¦ nqp: review: https://github.com/perl6/nqp/commit/4e2d7c7bfa
11:51 Geth ¦ nqp: b8f7784d40 | pmurias++ | t/nqp/102-multidim.t
11:51 Geth ¦ nqp: Test nqp::multidimref_* ops
11:51 Geth ¦ nqp: review: https://github.com/perl6/nqp/commit/b8f7784d40
11:56 AlexDaniel` joined #perl6-dev
11:58 Geth ¦ rakudo/js: 580a232e2f | pmurias++ | 3 files
11:58 Geth ¦ rakudo/js: Use multidimref_* on all backends
11:58 Geth ¦ rakudo/js: review: https://github.com/rakudo/rakudo/commit/580a232e2f
12:33 pmurias lizmat++ Zoffix++ # fixing .grab bug that was causing a extra test failure on the js backend
12:34 * Zoffix doesn't remember fixing anything like that...
12:38 * lizmat only vaguely remembers about nativeints and bigints
12:44 perlpilot joined #perl6-dev
12:49 pmurias Zoffix: you wrote the test ;)
12:49 Zoffix Ah ok :)
14:05 AlexDaniel` joined #perl6-dev
14:06 lizmat afk for a few hours
14:28 Geth ¦ rakudo/nom: 97b11edd61 | pmurias++ | src/core/Buf.pm
14:28 Geth ¦ rakudo/nom: Use nqp::bitneg_i instead of a nqp::bitxor_i and a mask
14:28 Geth ¦ rakudo/nom: review: https://github.com/rakudo/rakudo/commit/97b11edd61
14:47 gfldex where does „Too many arguments in flattening array.“ come from?
14:50 Zoffix $ grep -FRn 'Too many arguments in flattening array' nqp
14:50 Zoffix nqp/MoarVM/src/core/args.c:742:                MVM_exception_throw_adhoc(tc, "Too many arguments in flattening array.");
14:51 gfldex code that is triggering it: https://gist.github.com/gfldex/b356f074c480a11d6ddd08cfbb42e5bd
14:51 Zoffix m: (1...100000).race
14:51 camelia rakudo-moar 97b11edd6: ( no output )
14:52 gfldex RaceSeq is lazy
14:52 Zoffix m: @ = (1...100000).race
14:52 camelia rakudo-moar 97b11edd6: ( no output )
14:52 Zoffix *shrug* that's the error you get when you try to slip too many args
14:52 Zoffix m: say |(1...100000)
14:52 camelia rakudo-moar 97b11edd6: OUTPUT: «Too many arguments in flattening array.␤  in block <unit> at <tmp> line 1␤␤»
14:53 Zoffix So maybe the guts are slipping too much somewhere
14:53 Zoffix --ll-exception will probably tell where
14:53 * gfldex looks
14:56 Zoffix So the WebSocket issue hangs here: https://github.com/rakudo/rakudo/blob/nom/src/core/Supply.pm#L1948
14:58 gfldex stacktrace: https://imgur.com/a/ilqgg
14:59 gfldex m: say [max] (1..100000)
14:59 camelia rakudo-moar 97b11edd6: OUTPUT: «Too many arguments in flattening array.␤  in block <unit> at <tmp> line 1␤␤»
15:00 * gfldex uses issue paper
15:03 gfldex there are likely more such bugs lurking, because 1 year ago nobody would have dared to touch a 100000 element list :->
15:04 gfldex m: say [max] (1..2**15)
15:04 camelia rakudo-moar 97b11edd6: OUTPUT: «32768␤»
15:04 gfldex m: say [max] (1..2**16)
15:04 camelia rakudo-moar 97b11edd6: OUTPUT: «Too many arguments in flattening array.␤  in block <unit> at <tmp> line 1␤␤»
15:04 AlexDaniel` is it a bug?
15:05 gfldex AlexDaniel`: how would you justify a reduction operator that can't reduce large lists?
15:06 Zoffix AlexDaniel`: you have a robo message for AlexDaniel... Dunno when you wanted to do the release...
15:06 AlexDaniel` Zoffix: I've seen it, thanks
15:07 AlexDaniel` Zoffix: there's a wild jnthn on github, but I don't know if he'll respond to anything related to the sched issue
15:07 Zoffix :)
15:07 AlexDaniel .
15:07 yoleaux 11:39Z <Zoffix> AlexDaniel: this branch fixes the bug in WebSocket but it does it by just avoiding the affinity queue: https://github.com/rakudo/rakudo/commit/418dbbd8a3  I guess that's fine if a release has to be made ¯\_(ツ)_/¯
15:09 AlexDaniel` I just came home so I'll wait a bit before doing any bad decisions :)
15:10 AlexDaniel` gfldex: fwiw, have you seen this? https://docs.perl6.org/language/traps#Argument_Count_Limit
15:11 Zoffix So what do you do when you can't just swap `.push` to `.append`?
15:11 AlexDaniel` why can't you?
15:12 Zoffix That limit is a bit of a thorn. 'cause any time you're doing `.something: |@a` you're risking a crash if you don't know how big @a is
15:12 AlexDaniel` Zoffix: yes, so the point is that with the current implementation you should not do that
15:12 Zoffix AlexDaniel`: 'cause there's no alternative routine?
15:12 AlexDaniel` m: say (1..2**16).reduce(&max)
15:12 camelia rakudo-moar 97b11edd6: OUTPUT: «65536␤»
15:14 AlexDaniel` Zoffix: dunno, if I ever get into that situation I'd just cry
15:14 Zoffix m: class Foo { method process(@x) {} }.new.process: 42, |(1..100000)
15:14 camelia rakudo-moar 97b11edd6: OUTPUT: «Too many arguments in flattening array.␤  in method process at <tmp> line 1␤  in block <unit> at <tmp> line 1␤␤»
15:14 Zoffix m: class Foo { method process(*@x) {} }.new.process: 42, |(1..100000)
15:14 camelia rakudo-moar 97b11edd6: OUTPUT: «Too many arguments in flattening array.␤  in method process at <tmp> line 1␤  in block <unit> at <tmp> line 1␤␤»
15:15 AlexDaniel` m: class Foo { method process(@x) { say @x } }.new.process: @(42,|(1..100000))
15:15 camelia rakudo-moar 97b11edd6: OUTPUT: «(42 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 8…»
15:15 AlexDaniel` m: class Foo { method process(@x) { say @x } }.new.process: (42,|(1..100000))
15:15 camelia rakudo-moar 97b11edd6: OUTPUT: «(42 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 8…»
15:19 Zoffix m: class Foo { method process(**@x) {@x.elems.say} }.new.process: (42, |(1..100000))
15:19 camelia rakudo-moar 97b11edd6: OUTPUT: «1␤»
15:19 AlexDaniel` argh, there was a ticket asking to change the behavior of […] with one arg
15:20 AlexDaniel` can't find it
15:20 AlexDaniel` ah RT128758
15:20 AlexDaniel` ah RT#128758
15:20 synopsebot RT#128758 [open]: https://rt.perl.org/Ticket/Display.html?id=128758 Reduce with numeric ops does not numify things if only one arg is passed ([*] set(1,2,3))
15:21 AlexDaniel` no, that's not it
15:21 AlexDaniel` but very close
15:21 Zoffix m: say &infix:<*>(set 1, 2, 3)
15:21 camelia rakudo-moar 97b11edd6: OUTPUT: «3␤»
15:21 Zoffix m: say [*] set 1, 2, 3
15:21 camelia rakudo-moar 97b11edd6: OUTPUT: «3␤»
15:29 jnthn evening, #perl6 o/
15:29 yoleaux 23 Oct 2017 19:55Z <lizmat> jnthn: I wonder whether the difference between .hyper and .race is really whether results are buffered or not
15:29 yoleaux 23 Oct 2017 19:55Z <lizmat> jnthn: I could see .race internally working as a Supply, emitting values from several threads as they become available
15:29 yoleaux 23 Oct 2017 19:57Z <lizmat> jnthn: instead of .pushing to an IterationBuffer until the end of the batch
15:29 yoleaux 24 Oct 2017 12:29Z <lizmat> jnthn: I think I've reduced https://github.com/rakudo/rakudo/issues/1202 to a pure MoarVM issue
15:29 yoleaux 24 Oct 2017 20:48Z <AlexDaniel`> jnthn: if it happens that you come back before we figure it out, here is a thing to look at https://github.com/tokuhirom/p6-WebSocket/issues/15#issuecomment-339120879 (the release was delayed for other reasons anyway, so would be great to fix this thing also)
15:29 yoleaux 04:41Z <Zoffix> jnthn: I golfed AlexDaniel`'s issue into a test: https://github.com/perl6/roast/commit/74445ddf8a and committed a hack that makes the test pass: https://github.com/rakudo/rakudo/commit/ce7e5444a2  but WebSocket module still fails its tests because I'm guessing it got more than one AffiniteWorker active so teh deadlock still happens and isn't fixed by my hack. No idea why the deadlock actually occurs
15:29 yoleaux 04:42Z <Zoffix> jnthn: so dunno how to fix. My hack needs to be reverted
15:29 yoleaux 11:40Z <Zoffix> jnthn: reverted my hack and hardened the test to cover the stuff my hack didn't fix. A bit more debug info on the bug here: https://irclog.perlgeek.de/perl6-dev/2017-10-25#i_15351230
15:29 jnthn o.O
15:30 Zoffix \o/
15:31 timotimo greetings jnthn
15:31 Zoffix FWIW: <Zoffix> So the WebSocket issue hangs here: https://github.com/rakudo/rakudo/blob/nom/src/core/Supply.pm#L1948
15:31 timotimo no need to do point releases this time, at least not yet :P
15:32 AlexDaniel` timotimo: ಠ_ಠ
15:32 timotimo ftr i'm glad we haven't hit the big red release button yet
15:32 jnthn Oh, no release?
15:32 jnthn Why?
15:33 timotimo things be broken
15:33 jnthn Such as?
15:33 Geth ¦ rakudo/affinity-debugging-stuff: ca46390fcd | (Zoffix Znet)++ | 2 files
15:33 Geth ¦ rakudo/affinity-debugging-stuff: Share affinity debugging info
15:33 Geth ¦ rakudo/affinity-debugging-stuff: review: https://github.com/rakudo/rakudo/commit/ca46390fcd
15:33 AlexDaniel` jnthn: e.g. getc was not working on macos, now we're looking at the sched bug
15:34 AlexDaniel` (both things regressed)
15:34 jnthn Hm, getc? Interesting.
15:34 jnthn That one got fixed?
15:34 AlexDaniel` yeah https://github.com/MoarVM/MoarVM/pull/731
15:34 Zoffix FWIW ^ that branch got debug prints I added all over the place and this is the working code https://gist.github.com/zoffixznet/86d1eaa8897bbe7c092d1368a99f64c0 and if you comment out that "start @ =" it'll hang
15:35 Zoffix running with ZZ=1 ZZ4=1 perl6 teh-script
15:35 jnthn ah, ok
15:35 Zoffix Actually it won't hang it just won't print the "but not here" line
15:35 Zoffix But in real code that's a hang
15:36 jnthn On affinity scheduling stuff, my guess is that something manages to create a dependency between two affinity-scheduled things. That would be a kinda odd thing to have happen, but I guess test code that ends up with client and server in the same program could hit such things.
15:37 jnthn It's also far more likely to happen under 6.c await semantics
15:37 jnthn A reasonable thing to do is probably to have the supervisor look to see if there are affinity workers making no progress, and just steal work from them into the general queue.
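A rough sketch of the supervisor change jnthn suggests (all method and attribute names here are made up for illustration; the real code lives in src/core/ThreadPoolScheduler.pm and differs):

```perl6
# Hypothetical supervisor pass over the affinity workers.
method !steal-from-stuck-affinity-workers(@affinity-workers, $general-queue) {
    for @affinity-workers -> $worker {
        # "Making no progress" meaning: the worker's queue is non-empty
        # but its completed-work counter hasn't moved since the last
        # supervisor tick.
        if $worker.queue.elems && !$worker.made-progress {
            # Steal the stuck items into the general queue, where any
            # general worker can pick them up, breaking the deadlock.
            while $worker.queue.poll -> $task {
                $general-queue.push: $task;
            }
        }
    }
}
```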
15:38 Zoffix problem exists in v6.d.PREVIEW too
15:38 jnthn We can't really fix it in terms of creating more affinity workers.
15:38 Zoffix Supervisor doesn't currently care about affinity workers tho, right?
15:38 jnthn Right
15:38 Zoffix OK
15:39 jnthn Does the problem case in question have a client and server in the same program?
15:39 Zoffix Yes
15:39 jnthn Where the first connects to the second?
15:39 Zoffix Yup
15:39 jnthn OK, then the solution I suggested will probably do it
15:39 AlexDaniel` is it really required to reproduce it?
15:39 jnthn AlexDaniel`: My guess would be yes
15:39 Zoffix OK. Unless someone beats me to it, I'll give it a go in 5hr
15:39 AlexDaniel` I thought having server and client separately makes no difference
15:40 AlexDaniel` at least that's how I repro-ed it in the first place
15:40 Zoffix This is the code that repros the bug and it got server and client in same code. Didn't try having them in separate programs: https://github.com/perl6/roast/blob/master/MISC/bug-coverage-stress.t#L92-L104
15:40 * Zoffix &
15:41 jnthn Hmmm
15:41 jnthn But in that code it's using a sync socket to do the testing
15:42 jnthn Also
15:42 jnthn IO::Socket::Async.listen: '127.0.0.1', 15556 + $_ for ^10;
15:42 jnthn The supplies are never tapped, so it won't actually use up any sockets at all there?
15:42 jnthn s/use/fire/
15:43 jnthn ohhh... .list :/
15:43 jnthn Goodness, that's asking for trouble
15:44 jnthn But yeah, the fix I suggested will do it
15:44 AlexDaniel` yes, just tried it, the client can be in a separate script
15:44 jnthn Yeah, it's the .list
15:45 jnthn It blocks up an affinity worker
15:45 AlexDaniel` jnthn: also, not sure if you saw it or not, but https://github.com/jnthn/oo-actors/issues/6 and https://github.com/jnthn/p6-test-scheduler/issues/3
15:46 jnthn Yeah, test-scheduler needing attention doesn't surprise me in the slightest, given how much the real scheduler has changed.
15:46 jnthn The oo-actors one is more surprising
15:47 AlexDaniel` my understanding is that both are non-issues for the release, but I don't know if there are any bigger underlying problems with these
15:47 AlexDaniel` so let me know if there's anything important
15:49 jnthn .tell lizmat The difference between hyper/race is whether we - at the end of the pipeline - just hand results back whenever they are ready, or instead ensure we hand back results relative to their input order. All the other differences that pop up are based on what you can get away with under race, but couldn't under hyper.
15:49 yoleaux jnthn: I'll pass your message to lizmat.
15:53 jnthn .tell lizmat A Supply emitting results whenever they're available isn't a particularly good design for hyper/race, as that implies concurrency control per result, not per batch of results.
15:53 yoleaux jnthn: I'll pass your message to lizmat.
15:57 jnthn m: my $n = BagHash.new: "a" => 0, "b" => 1, "c" => 2, "c" => 2; say $n.perl
15:57 camelia rakudo-moar 97b11edd6: OUTPUT: «(:c(2)=>2,:b(1),:a(0)).BagHash␤»
15:57 jnthn m: my $n = BagHash.new-from-pairs: "a" => 0, "b" => 1, "c" => 2, "c" => 2; say $n.perl
15:57 camelia rakudo-moar 97b11edd6: OUTPUT: «("b","c"=>4).BagHash␤»
15:58 jnthn It might be less confusing if the .new example at https://docs.perl6.org/type/BagHash#Creating_BagHash_objects didn't use pairs that look like they'll be providing weights as its first example
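For readers puzzled by the two camelia outputs above, here is a rough Python analogy (not Rakudo code) using collections.Counter: .new treats each Pair as one opaque item, while .new-from-pairs reads each Pair as key => weight.

```python
from collections import Counter

pairs = [("a", 0), ("b", 1), ("c", 2), ("c", 2)]

# Like BagHash.new: each Pair is one opaque item, counted once
as_items = Counter(pairs)

# Like BagHash.new-from-pairs: each Pair supplies key => weight;
# weights for the same key add up, and zero weights disappear
as_weights = Counter()
for key, weight in pairs:
    as_weights[key] += weight
as_weights = +as_weights  # unary + drops non-positive counts
```

So `as_items` counts the pair ("c", 2) twice as an item, while `as_weights` ends up as {"b": 1, "c": 4} with "a" gone entirely, mirroring the two BagHash outputs.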
15:59 timotimo cool, sounds like the release could be close to go-mode :)
16:00 Zoffix Filed as D#1629
16:00 synopsebot D#1629 [open]: https://github.com/perl6/doc/issues/1629 [docs][Hacktoberfest][LHF] Improve BagHash.new examples
16:00 jnthn Zoffix++
16:01 [Coke] samcv: no one is building docs on windows, as the makefile is busted with nmake.
16:01 [Coke] so I'll rip out the non-async variant of the highlighter.
16:03 jnthn Noticed it while commenting on https://github.com/rakudo/rakudo/issues/1203
16:05 jnthn Still a bit tired from travelling back from vacation, so gonna go rest some
16:06 jnthn Zoffix: I doubt I'll find energy tonight to look at the affinity scheduler thing, though if you don't manage it I can look tomorrow probably. Hints: remember that $!affinity-workers and other such Lists are immutable and so should always be read from the attribute exactly once into a lexical, and then only ever accessed through that lexical, to get a consistent view.
16:07 Zoffix OK
16:07 jnthn Also, I think some kind of "how many times did we ask since this worker last completed an item" may be needed
16:07 jnthn We don't want to steal *too* eagerly
16:08 jnthn Otherwise we lose the point of affinity scheduling
16:08 jnthn I guess "how many times did we see it not make progress" is perhaps a better way of expressing it
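The steal heuristic jnthn sketches above can be illustrated in Python: count how many supervisor passes an affinity worker goes without completing anything, and only steal once that count crosses a threshold. Names and the threshold value are illustrative, not Rakudo's actual ThreadPoolScheduler code.

```python
import queue

# Illustrative sketch (not Rakudo's ThreadPoolScheduler) of the
# "steal after repeated no-progress" heuristic discussed above.
STEAL_THRESHOLD = 10  # supervisor passes with no progress before stealing

class AffinityWorker:
    def __init__(self):
        self.queue = queue.Queue()       # this worker's private queue
        self.completed = 0               # items finished so far
        self.last_seen_completed = 0     # snapshot from the last pass
        self.no_progress_count = 0

def supervise(workers, general_queue):
    """One supervisor pass: steal one item from workers that look stuck."""
    for w in workers:
        if w.completed == w.last_seen_completed and not w.queue.empty():
            w.no_progress_count += 1     # busy but finishing nothing
        else:
            w.no_progress_count = 0      # progress was made; reset
        w.last_seen_completed = w.completed
        if w.no_progress_count >= STEAL_THRESHOLD:
            try:
                general_queue.put(w.queue.get_nowait())  # steal one item
            except queue.Empty:
                pass
            w.no_progress_count = 0
```

Resetting the counter after a steal (or after any observed progress) keeps the supervisor from stealing too eagerly, which would defeat the point of affinity scheduling.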
16:09 * jnthn bbl
16:35 Zoffix buggable is ded again :|
16:36 buggable joined #perl6-dev
16:36 Zoffix buggable: zen gimme some
16:36 buggable Zoffix, "Zen has no business with ideas."
16:36 Zoffix Great :(
17:16 Zoffix m: with $*TMPDIR.add: "foo" { .spurt: buf64.new: 1000000000, 2, 3; .s.say; .slurp(:bin).say }
17:16 camelia rakudo-moar 97b11edd6: OUTPUT: «'/tmp/foo' is a directory, cannot do '.open' on a directory␤  in block <unit> at <tmp> line 1␤␤»
17:17 Zoffix m: with $*TMPDIR.add: "foofdasdsadas" { .spurt: buf64.new: 1000000000, 2, 3; .s.say; .slurp(:bin).say }
17:17 camelia rakudo-moar 97b11edd6: OUTPUT: «write_fhb requires a native array of uint8 or int8␤  in block <unit> at <tmp> line 1␤␤»
17:18 [Coke] why is it "my role X::Temporal is Exception { }" but "my role X::Comp { ... }"
17:19 [Coke] (in rakudo source) - found while trying to true up the type graph
17:20 [Coke] (er, the type graph in perl6/doc)
17:22 * timotimo would like to see the supervisor thread do less cpu usage
17:22 timotimo though perhaps it got faster since the last time i looked
17:25 ugexe m: sub foo(+x = 1) { x }; say foo();
17:25 camelia rakudo-moar 97b11edd6: OUTPUT: «===SORRY!===␤At Frame 2, Instruction 1, op 'param_sp' has invalid number (3) of operands; needs 2.␤»
17:26 AlexDaniel` ouch
17:26 llfourn joined #perl6-dev
17:27 AlexDaniel` c: 2015.07.2 sub foo(+x = 1) { x }; say foo();
17:27 committable6 AlexDaniel`, ¦2015.07.2: «===SORRY!=== Error while compiling /tmp/P3T_Fy6_zM␤Malformed parameter␤at /tmp/P3T_Fy6_zM:1␤------> sub foo(⏏+x = 1) { x }; say foo();␤    expecting any of:␤        formal parameter «exit code = 1»»
17:27 AlexDaniel` that's a bit better :)
17:27 timotimo we only support +@x, yeah?
17:28 ugexe yeah, its about the error message
17:28 Zoffix m: sub foo(+x) { dd x }(42)
17:28 camelia rakudo-moar 97b11edd6: OUTPUT: «(42,)␤»
17:28 timotimo or maybe the scheduler could see that cpu usage has been > 1% and slow down a little bit
17:29 timotimo or maybe if all queues are empty, it could sleep until any queue got items pushed to it?
17:29 Zoffix m: sub foo(*@x = 42) { dd @x }(42)
17:29 camelia rakudo-moar 97b11edd6: OUTPUT: «5===SORRY!5=== Error while compiling <tmp>␤Cannot put default on slurpy parameter @x␤at <tmp>:1␤------> 3sub foo(*@x = 427⏏5) { dd @x }(42)␤    expecting any of:␤        constraint␤»
17:30 Zoffix m: sub foo(+@x = 42) { dd @x }(42)
17:30 camelia rakudo-moar 97b11edd6: OUTPUT: «===SORRY!===␤At Frame 2, Instruction 1, op 'param_sp' has invalid number (3) of operands; needs 2.␤»
17:30 AlexDaniel` here's the commit that added +@foo feature: https://github.com/rakudo/rakudo/commit/1152728af8e6f45e6e4504c14495747406b6eb37
17:31 Zoffix K, I see the fix
17:31 * Zoffix hackety hacks
17:34 lizmat timotimo: how do you know that the supervisor thread uses so much ?
17:34 yoleaux 15:49Z <jnthn> lizmat: The difference between hyper/race is whether we - at the end of the pipeline - just hand results back whenever they are ready, or instead ensure we hand back results relative to their input order. All the other differences that pop up are based on what you can get away with under race, but couldn't under hyper.
17:34 yoleaux 15:53Z <jnthn> lizmat: A Supply emitting results whenever they're available isn't a particularly good design for hyper/race, as that implies concurrency control per result, not per batch of results.
17:35 timotimo time perl6 -e 'my $p = Proc::Async.new("echo"); $p.start; sleep 30'  -  2% cpu
17:35 timotimo it's not actually terrible
17:36 AlexDaniel` I was also thinking about this line: https://github.com/rakudo/rakudo/commit/61a77e60a7d936415503d8916fcc7546569e9135#diff-6d7bfa05d538ae828eff330472ec119fR462
17:36 AlexDaniel` how well is it optimized away when the env var is not set?
17:37 Zoffix It's just +    sub scheduler-debug-status($message) {
17:37 Zoffix +        if $scheduler-debug-status {
17:37 timotimo shouldn't be terrible, though you pay for a sub invocation each time
17:37 Zoffix +            note "[SCHEDULER] $message";
17:37 Zoffix +        }
17:37 Zoffix +    }
17:37 AlexDaniel` timotimo: what about the construction of a str?
17:38 ugexe the `[max] (1..1000000)` bug can be avoided by using values.Slip instead of |values, but that breaks a spectest where it wants Bool::False from [^^] () eq Bool::False but fails because its inside a slip (so probably some signature tweak/reordering is needed)
17:39 ugexe here https://github.com/rakudo/rakudo/blob/nom/src/core/metaops.pm#L414
17:39 timotimo oh, right
17:39 timotimo if it were a macro that put a check for "do we want debug?" everywhere it'd skip that
17:40 timotimo it also does a sum over the last n measurements
17:41 timotimo though i imagine there'd be no time where $!general-queue and $!timer-queue would both be undefined but the supervisor is running anyway?
17:42 Zoffix there will be tonight when it's taught to prod the affinity workers too
17:43 AlexDaniel` timotimo: and yeah, I also noticed that it does too much when it shouldn't be doing anything
17:44 timotimo that's only interesting if it doesn't also pass the smoothed work time to the prodding sub
17:45 AlexDaniel` (cpu usage I mean)
17:46 timotimo i don't have good ideas here. except we could instantiate the utilization values to be 5 values and throw out the check for the size of the array :P
17:49 Zoffix Why exactly can't slurpies have default values?
17:50 Zoffix Just NYI?
17:53 [Coke] (X::Comp) ah, it's just setting predclaration issues.
17:55 Zoffix Guess there's an issue on when a slurpy arg is meant to be assumed as "missing" to use the default. OK then
17:58 Geth ¦ rakudo/nom: a92950fb4f | (Zoffix Znet)++ | src/Perl6/Actions.nqp
17:58 Geth ¦ rakudo/nom: Fix poor error with some slurpies with defaults
17:58 Geth ¦ rakudo/nom:
17:58 Geth ¦ rakudo/nom: Bug find: https://irclog.perlgeek.de/perl6-dev/2017-10-25#i_15352740
17:58 Geth ¦ rakudo/nom: review: https://github.com/rakudo/rakudo/commit/a92950fb4f
17:59 Geth ¦ roast: 1a0162d8f0 | (Zoffix Znet)++ | S06-signature/defaults.t
17:59 Geth ¦ roast: Test all slurpies throw helpful error with defaults
17:59 Geth ¦ roast:
17:59 Geth ¦ roast: Bug find: https://irclog.perlgeek.de/perl6-dev/2017-10-25#i_15352740
17:59 Geth ¦ roast: Rakudo fix: https://github.com/rakudo/rakudo/commit/a92950fb4f
17:59 Geth ¦ roast: review: https://github.com/perl6/roast/commit/1a0162d8f0
18:10 ugexe m: sub foo(+$x [$ is rw = False]) { $x }; say foo().perl;
18:10 camelia rakudo-moar 97b11edd6: OUTPUT: «Unhandled exception: concatenate requires a concrete string, but got null␤   at SETTING::src/core/Exception.pm:395  (/home/camelia/rakudo-m-inst-1/share/perl6/runtime/CORE.setting.moarvm:print_exception)␤ from SETTING::src/core/Exception.pm:452  (…»
18:11 lizmat afk again to Thor some Ragnarok&
18:15 timotimo oooh, i'm looking forward to seeing thor ragnarok, too
18:18 Zoffix Files as R##1211
18:18 Zoffix Filed as R#1211
18:18 synopsebot R#1211 [open]: https://github.com/rakudo/rakudo/issues/1211 LTA error with `is rw` defaults in an unpacked slurpy
18:21 ugexe m: sub foo(+$x [$a is rw = False]) { $x }; say foo().perl # ftr
18:21 camelia rakudo-moar a92950fb4: OUTPUT: «5===SORRY!5=== Error while compiling <tmp>␤Cannot use 'is rw' on optional parameter '$a'.␤at <tmp>:1␤»
18:39 Zoffix add it on the Issue :)
18:45 * [Coke] is jealous again of liz getting to see Marvel stuff so much sooner. :)
18:49 [Coke] why do we have Cursor if it's just an alias for Match?
18:49 [Coke] m: Cursor.^mro.say ; Match.^mro.say;
18:49 camelia rakudo-moar a92950fb4: OUTPUT: «((Match) (Capture) (Cool) (Any) (Mu))␤((Match) (Capture) (Cool) (Any) (Mu))␤»
18:49 timotimo it used to be different
18:49 timotimo both still exist so that existing code doesn't break
18:50 [Coke] I ask because that test in the docs for mro fails because Capture is unique.
18:50 timotimo ah, hmm
18:50 [Coke] (in having something other than itself as the first item in the mro.)
18:51 [Coke] can we kill it in 6.d?
19:02 [Coke] (I have the doc issue fixed locally, anyway)
19:05 [Coke] m: say Failure.^mro
19:05 camelia rakudo-moar a92950fb4: OUTPUT: «((Failure) Nil (Cool) (Any) (Mu))␤»
19:06 [Coke] m: say ~::("Failure").^mro.map: *.^name;
19:06 camelia rakudo-moar a92950fb4: OUTPUT: «Failure Nil Cool Any Mu␤»
19:11 * Zoffix is using Synergy to KVM from a Windows 10 desktop to a Windows 7 laptop to remote-desktop into a Win7 desktop to ssh to a Ubuntu desktop to ssh to a Debian Wheezy server to git push a repo to gitlab to be able to pull it onto a Bodhi Linux VM running inside Windows 10 host
19:11 Zoffix ZofBot: the future is here!
19:11 ZofBot Zoffix, I'd rather kill a black cat than lose you
19:11 samcv please don't ZofBot
19:13 Zoffix ZofBot: I rather you lose me than kill any cat :(
19:13 ZofBot Zoffix, 'Do it again,' said Sidney, all grin and sleek immaculateness
19:22 AlexDaniel` .oO( “I'd rather kill a black…” O_O “… cat” oh… )
19:25 evalable6 joined #perl6-dev
19:25 AlexDaniel` evalable6: what is wrong with you /o\
19:25 evalable6 AlexDaniel`, rakudo-moar a92950fb4: OUTPUT: «(exit code 1) ===SORRY!===␤Unrecognized regex metacharacter \ (must be q…»
19:25 evalable6 AlexDaniel`, Full output: https://gist.github.com/e89e70ef9209d0b8c1ad20fce45ce871
19:26 llfourn joined #perl6-dev
19:26 AlexDaniel` evalable6: WHAT <is wrong> with ‘you’ ~~ /o/
19:26 evalable6 AlexDaniel`, rakudo-moar a92950fb4: OUTPUT: «»
20:08 Zoffix <timotimo> that's only interesting if it doesn't also pass the smoothed work time to the prodding sub
20:08 Zoffix So far I don't see the reason to pass it
20:22 Zoffix and yeah, even right now the supervisor starts up when there are only affinity workers
20:29 evalable6 joined #perl6-dev
20:59 Zoffix :/ my affinity worker fix worked the first time :\
21:00 Zoffix Wonder if that means I messed it up :P
21:10 [Coke] Zofbot--
21:10 Zoffix jnthn: any ballpark for what to consider stealing the queue too early? My first take is 10 times we saw a busy worker not complete anything, which I'm guessing is about .1s
21:10 Zoffix [Coke]: why? :(
21:12 timotimo perhaps time for exponential back-off?
21:12 timotimo or quadratic back-off who knows
21:12 * Zoffix has no idea what that is...
21:12 [Coke] I can't get rid of him. at least it's only down to the occasional ping I see these days.
21:13 Zoffix But why do you hate it so much that you want to get rid of it? :)
21:13 jnthn Zoffix: 0.1 is a bit high for the first time we do it perhaps
21:14 jnthn Zoffix: But I guess get it to work at all and then we can tune it
21:14 Zoffix OK
21:18 Zoffix ZOFVM: Files=1283, Tests=152773, 153 wallclock secs (21.03 usr  3.34 sys + 3302.43 cusr 167.90 csys = 3494.70 CPU)
21:19 jnthn Does it always fix the problem, or only sometimes? :)
21:21 Zoffix looks like always
21:22 Geth ¦ rakudo/nom: 43b7cfde31 | (Zoffix Znet)++ | src/core/ThreadPoolScheduler.pm
21:22 Geth ¦ rakudo/nom: Fix deadlock with affinity workers
21:22 Geth ¦ rakudo/nom:
21:22 Geth ¦ rakudo/nom: Fixes https://github.com/tokuhirom/p6-WebSocket/issues/15#issuecomment-339120879
21:22 Geth ¦ rakudo/nom: and RT#132343: https://rt.perl.org/Ticket/Display.html?id=132343
21:22 Geth ¦ rakudo/nom:
21:22 Geth ¦ rakudo/nom: Make supervisor keep an eye on affinity workers and if we spot any
21:22 Geth ¦ rakudo/nom: that are working and haven't completed anything for a while, steal
21:22 synopsebot RT#132343 [open]: https://rt.perl.org/Ticket/Display.html?id=132343 [REGRESSION] better-sched and other async improvement ecosystem fallout
21:22 Geth ¦ rakudo/nom: their queue into general queue. Per:
21:22 Geth ¦ rakudo/nom: https://irclog.perlgeek.de/perl6-dev/2017-10-25#i_15352262
21:22 Geth ¦ rakudo/nom: review: https://github.com/rakudo/rakudo/commit/43b7cfde31
21:23 Geth ¦ roast: e997143e09 | (Zoffix Znet)++ | MISC/bug-coverage-stress.t
21:23 Geth ¦ roast: Unfudge now-passing supply-in-a-sock deadlock test
21:23 Geth ¦ roast:
21:23 Geth ¦ roast: Rakudo fix: https://github.com/rakudo/rakudo/commit/43b7cfde31
21:23 Geth ¦ roast: review: https://github.com/perl6/roast/commit/e997143e09
21:23 Zoffix jnthn: ^ that's the fix. Hope it doesn't do something insane:)
21:28 llfourn joined #perl6-dev
21:40 jnthn Zoffix: Yes, but please write concurrency code using normal if/loop etc rather than nqp:: ops
21:40 jnthn Oh, also I was going to steal one item at a time from the affinity queue
21:40 jnthn Also
21:40 jnthn $!state-lock.protect: {
21:40 Zoffix OK. I just saw nqp used around that area.
21:41 jnthn oh god
21:41 Zoffix uh-oh :P
21:41 jnthn Did I really do that
21:41 * AlexDaniel` is dropping to bed. Have a nice * everyone
21:41 Zoffix \o
21:42 AlexDaniel` Zoffix: ♥ your work
21:42 AlexDaniel` I* :)
21:44 jnthn No, I didn't
21:44 jnthn I...really, really do not wish to maintain code written using nqp::if, nqp::while, etc.
21:45 Zoffix jnthn: yeah, I'm changing it now and I'll make it take just 1 item
21:45 jnthn I don't mind it happening in other bits of CORE.setting, but concurrency stuff...it's hard enough as it is.
21:45 Zoffix noted :)
21:45 jnthn I don't think that lock is needed, btw
21:46 jnthn Unfortunately, there's a data race too
21:46 jnthn Well
21:46 jnthn Yeah, it could actually hang the supervisor if we're super unlucky (which means, it will happen some day...)
21:47 jnthn +                        nqp::while( +                          nqp::elems($worker-queue), +                          nqp::push($!general-queue, nqp::shift($worker-queue)))
21:47 jnthn oops, that pasted horribly
21:47 jnthn Anyways, nqp::shift there takes from the work queue
21:47 jnthn But at this point, we're in a race with the affinity worker itself
21:47 jnthn So this can happen:
21:47 jnthn 1. Supervisor calls nqp::elems, it's 1
21:48 jnthn 2. Affinity worker gets done with its last work item, does a shift on the queue
21:48 jnthn 3. Supervisor calls shift on an empty queue, blocks
21:49 jnthn A better way would be to use nqp::pollqueue($worker-queue)
21:49 Zoffix There's a comment at  has Lock $!state-lock = Lock.new; that says "# All of the worker and queue state below is guarded by this lock." so I figured I had to lock if I touched $!general-queue
21:49 jnthn No, that's guarding the lists of workers
21:49 jnthn The queues themselves are actually concurrent queues
21:49 Zoffix Ah, ok
21:49 jnthn And so they are safe to use from many threads
21:50 jnthn Anyway, the poll thingy will return nqp::null if the queue is empty
21:50 Zoffix grep -FR 'pollqueue' . gives me nothing. Is that the right thing?
21:50 jnthn heh, no...I forgot what the op is really called :)
21:50 Zoffix ah queuepoll
21:50 jnthn oh, right :)
21:51 jnthn So instead of nqp::elems, just poll, if the thing that comes out isn't null then nqp::push it to the general queue
21:51 Zoffix OK
21:51 jnthn If it is null, there was nothing in the queue, so do nothing
21:51 jnthn nqp::queuepoll never blocks
21:51 jnthn So it can't ever hang the supervisor
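The elems-then-shift race and the queuepoll fix map directly onto Python's queue module (illustrative sketch, not the Rakudo code): qsize()-then-get() is the same check-then-act race, and get_nowait() plays the role of nqp::queuepoll.

```python
import queue

def steal_one_racy(worker_queue, general_queue):
    # BAD: the worker can drain the queue between these two lines,
    # and then get() blocks the supervisor forever
    if worker_queue.qsize() > 0:
        general_queue.put(worker_queue.get())

def steal_one_safe(worker_queue, general_queue):
    # GOOD: poll without blocking, like nqp::queuepoll; if the queue
    # turned out to be empty, simply do nothing
    try:
        item = worker_queue.get_nowait()
    except queue.Empty:
        return
    general_queue.put(item)
```

The safe variant can never hang the caller, which is exactly the property the supervisor needs.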
21:51 Zoffix \o/
21:51 jnthn Those things aside, this looks sensible
21:52 jnthn Zoffix++
21:53 Zoffix \o/
22:03 jnthn Sleep time, 'night o/
22:04 Zoffix \o
22:15 Zoffix :/ t/spec/S17-channel/stress.t "hung"
22:16 Zoffix hm, it's randomness-based
22:21 Zoffix m: loop { my @p = < p e r l >.pick: *; if [!after] @p { dd @p; last } }; say now - INIT now
22:21 camelia rakudo-moar 43b7cfde3: OUTPUT: «Array @p = ["e", "l", "p", "r"]␤0.0138634␤»
22:21 Zoffix m: loop { my @p = < p e r l >.pick: *; if [!after] @p { dd @p; last } }; say now - INIT now
22:21 camelia rakudo-moar 43b7cfde3: OUTPUT: «Array @p = ["e", "l", "p", "r"]␤0.0073785␤»
22:22 Zoffix :S wonder why test is so slow.
22:22 Zoffix 4m58.317s and I had to kill it
22:24 gfldex even on a fast machine pick-sort can take a while :)
22:25 Zoffix but above it doesn't
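For reference, the one-liner being timed above is a bogosort: shuffle until the letters come out in non-descending order. An illustrative Python rendering shows why 4 elements finish almost instantly even though the algorithm is O((n+1)!) on average:

```python
import random

# Illustrative Python rendering of the one-liner above: shuffle the
# letters of "perl" until they are in sorted order (bogosort).
# With 4 letters there are only 24 orderings, so this ends quickly.
letters = list("perl")
while letters != sorted(letters):
    random.shuffle(letters)
# letters is now ['e', 'l', 'p', 'r']
```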
22:26 Zoffix I see now why AlexDaniel` pointed out perf of scheduler-debug-status... It gets called like a billion times
22:27 Zoffix *gazillion (I didn't really count or anything)
22:29 Zoffix ZOFVM: Files=1283, Tests=152773, 156 wallclock secs (21.43 usr  3.84 sys + 3369.68 cusr 182.74 csys = 3577.69 CPU)
22:30 Zoffix Ahhhh
22:33 Zoffix When the box is busy with the rest of the stresstest, the heuristic for deadlock adds more workers and test passes. If the test ends up closer to end or whatever, the rest of the stresstest doesn't generate enough CPU use for deadlock heuristic to get triggered, so it never happens. I can't get it to complete when running with just RAKUDO_SCHEDULER_DEBUG=1 ./perl6 t/spec/S17-channel/stress.t on 24-core box
22:33 Zoffix and on my home 4-core box I can't get it to complete 'cause it adds more workers but it also noms a ton of RAM. I run out of RAM before it gets a chance to complete
22:34 Zoffix So it means there's another scheduler LTAness
22:35 Geth ¦ rakudo/nom: 59bfa5ab37 | (Zoffix Znet)++ | src/core/ThreadPoolScheduler.pm
22:35 Geth ¦ rakudo/nom: Polish off affinity worker prodder
22:35 Geth ¦ rakudo/nom:
22:35 Geth ¦ rakudo/nom: Per https://irclog.perlgeek.de/perl6-dev/2017-10-25#i_15353842
22:35 Geth ¦ rakudo/nom:
22:35 Geth ¦ rakudo/nom: - Improve readability
22:35 Geth ¦ rakudo/nom: - Steal only one item from the queue
22:35 Geth ¦ rakudo/nom: - Prevent potential supervisor deadlock from elems'ing the queue
22:35 Geth ¦ rakudo/nom:     and having a race empty it and for nqp::shift() to block
22:35 Geth ¦ rakudo/nom: review: https://github.com/rakudo/rakudo/commit/59bfa5ab37
23:05 evalable6 joined #perl6-dev
23:07 Zoffix m: my @last-utils = ^5; for ^3 { @last-utils = @last-utils.rotate; @last-utils[0] = Int(rand*10); }; say @last-utils
23:07 camelia rakudo-moar 59bfa5ab3: OUTPUT: «[9 4 0 8 3]␤»
23:07 Zoffix m: my int @last-utils = ^5; for ^3 { @last-utils = @last-utils.rotate; @last-utils[0] = Int(rand*10); }; say @last-utils
23:07 camelia rakudo-moar 59bfa5ab3: OUTPUT: «4 4 0 0 1 2 3 4 6 6 2 3 4 0 0 1 2 3 4 4 4 3 4 0 0 1 2 3 4 6 6 2 3 4 0 0 1 2 3 4␤»
23:07 Zoffix fun :")
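Setting the native-array bug aside, the intended behaviour here is a fixed-size rolling window of the last utilisation samples. An illustrative Python sketch (not the Rakudo internals) gets the rotate-and-overwrite effect with a bounded deque, which cannot grow the way the `int` array above does:

```python
import random
from collections import deque

# Illustrative sketch: keep the last 5 utilisation samples in a
# fixed-size rolling window. A bounded deque gives rotate-and-overwrite
# without the growth bug the native int array shows above.
last_utils = deque([0] * 5, maxlen=5)

for _ in range(3):
    last_utils.appendleft(int(random.random() * 10))  # newest sample first

# the window never grows past 5 entries
```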
23:26 Zoffix Man the scheduler debug status thing is expensive
23:27 Zoffix Gonna comment it out and after release add a #?debug build preprocessor directive and stick it under there
23:30 llfourn joined #perl6-dev
23:45 Zoffix Hm, I guess not; doesn't even show up in profile, so screw it.
23:45 timotimo don't forget the profiler doesn't understand multithreaded programs yet
23:46 Zoffix ZOFFLOP: t/spec/S15-nfg/many-threads.t # segfaulted
23:46 timotimo so it's very unlikely that it'd show up even if it was a significant time cost
23:46 Zoffix timotimo: ohhh. right.
23:46 * timotimo is still waiting impatiently for wrists to recover >_<
23:46 Zoffix In time measurement for the sub, it turns up at .233s for 100_000 iterations
23:46 Zoffix m: say (0.233032/1000)
23:46 camelia rakudo-moar 59bfa5ab3: OUTPUT: «0.000233032␤»
23:47 Zoffix So it costs us .2ms for every rakudo program
23:47 Zoffix per second
23:48 Geth ¦ rakudo/nom: 27590e8bc7 | (Zoffix Znet)++ | src/core/ThreadPoolScheduler.pm
23:48 Geth ¦ rakudo/nom: Make supervisor per-core-util calculator 2.8x faster
23:48 Geth ¦ rakudo/nom:
23:48 Geth ¦ rakudo/nom: Most of this is from rewriteing &push to .push so it doesn't go
23:48 Geth ¦ rakudo/nom: through slippy candidate
23:48 Geth ¦ rakudo/nom: review: https://github.com/rakudo/rakudo/commit/27590e8bc7
23:49 Zoffix If I add the scheduler-debug-status() into the bench, that turns up as 2.3x faster and if I remove the scheduler-debug-status in the "new" version, it ends up 3.56x faster
23:50 Zoffix Meh, gonna leave it in. With more users using the release with the new scheduler it might come in handy to ask users to dump the status
23:50 * Zoffix looks into ./perl6 t/spec/S17-channel/stress.t issue...
23:51 Zoffix timotimo: your wrists still sore? :o been a long time.
23:51 Zoffix If I get RSI, I just stay 100% off the computer for a weekend and it goes away

| Channels | #perl6-dev index | Today | | Search | Google Search | Plain-Text | summary