Perl 6 - the future is here, just unevenly distributed

IRC log for #moarvm, 2016-06-22


All times shown according to UTC.

Time Nick Message
01:22 FROGGS_ joined #moarvm
01:48 ilbot3 joined #moarvm
01:48 Topic for #moarvm is now https://github.com/moarvm/moarvm | IRC logs at  http://irclog.perlgeek.de/moarvm/today
06:08 rurban_ joined #moarvm
06:47 leedo joined #moarvm
06:55 mtj__ joined #moarvm
07:06 harrow joined #moarvm
07:17 Ven_ joined #moarvm
07:52 Ven_ joined #moarvm
07:54 lizmat joined #moarvm
08:35 jnthn moarning o/
08:35 diakopter ooo///
08:38 nwc10 that it is
08:43 jnthn Also, Wednesday, meaning I get to do Perl 6 and Moar stuffs all day :)
08:43 * diakopter eyes stuffs like a turkey on Thanksgiving
08:44 diakopter .. I must be hungry
09:37 jnthn ...well, I thought all day, then got grabbed for something else. But now I'm free. :)
09:39 domidumont joined #moarvm
09:44 domidumont joined #moarvm
10:11 domidumont joined #moarvm
10:59 jnthn For anyone curious what I'm up to: re-working the multi-dispatch cache to handle named args, but hopefully also to be more efficient. :)
11:11 jnthn lunch &
11:48 Ven_ joined #moarvm
11:48 stmuk I'm running 'cd nqp && make m-bootstrap-files' on different platforms/compilers (both amd64) and getting different results for sha1sum src/vm/moar/stage0/nqp.moarvm
11:49 stmuk I assume this is expected?
12:03 lizmat joined #moarvm
12:07 jnthn stmuk: How about if you do it with the same compiler in two runs? But yes, NQP uses timestamps at present to distinguish the bootstrap phases.
12:08 nwc10 we should use MD4s to make it both reproducible and hackable, for comedy value :-/
12:09 jnthn We should do many things, but only have resources to do a fraction of them. :)
12:11 jnthn A smarter fix would be to use the sha1 of the source as today but append a -1 or -2 depending on bootstrap phase
12:12 stmuk jnthn: ah and I can also see absolute paths
12:13 stmuk I'm just wondering if it's possible to reproduce the moarvm blobs checked in
12:14 jnthn Not unless the timestamps can be fiddled, I suspect
12:14 stmuk libfaketime is used in reproducible gcc builds I believe
12:29 stmuk I suppose if I used the same windows path, worked out the time and used "RunAsDate" it might work ..
12:30 stmuk and traced *all* the steps back .. or just give up :)
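For context on the libfaketime suggestion: it works by intercepting the C time functions via LD_PRELOAD, so the build believes it always runs at the same instant. Below is a minimal sketch of that interception trick, assuming Linux/glibc; it is a toy shim only (real libfaketime also covers gettimeofday, clock_gettime, file timestamps and more), and the file name and fixed timestamp are arbitrary examples:

    /* toy_fixed_time.c (hypothetical name): build as a shared object and
     * LD_PRELOAD it so every call to time() during the bootstrap returns
     * the same instant, making any embedded timestamps deterministic.
     *
     *   cc -shared -fPIC -o toy_fixed_time.so toy_fixed_time.c
     *   LD_PRELOAD=$PWD/toy_fixed_time.so make m-bootstrap-files
     */
    #include <time.h>

    time_t time(time_t *tloc) {
        time_t fixed = 1466553600;   /* 2016-06-22 00:00:00 UTC, arbitrary */
        if (tloc)
            *tloc = fixed;
        return fixed;
    }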
12:55 avar joined #moarvm
13:02 brrt joined #moarvm
13:08 brrt good * #moarvm
13:08 brrt y'know what's /also/ high up the priority list
13:08 brrt aside from 'getting more hours per day'
13:08 brrt (we should move to mars.. mars has longer days?)
13:09 nwc10 I can't remember. Venus has *much* longer days. But the weather is always crap
13:09 brrt as a brit that shouldn't put you off?
13:10 brrt me fixing the insertion of spills
13:10 brrt and loads
13:10 nwc10 it's a different scale of "crap". And a nice cup of tea isn't good enough to compensate
13:12 lizmat joined #moarvm
13:13 nwc10 google claims 1d 0h 40m
13:13 nwc10 Mars, Length of day
13:14 nwc10 so that's actually closer to what I thought I'd read the body does (without sunlight to force a schedule)
13:15 nwc10 what it's failing to tell me is what temperature water would boil at. (0.6% of Earth's atmosphere, but I don't know how to translate that to "how cold is my tea then?")
13:15 brrt it differs greatly per person (biological clock)
13:16 brrt average is about 25 hours, but some people have much longer, others somewhat shorter
13:16 nwc10 anyway, if the choice is only Mars or Venus, I'll go for Mars.
13:16 brrt not hot enough to be worth the drink, i think
13:17 brrt and, i don't have enough physics on me to figure out what the boiling temperature would be
13:19 geekosaur ice sublimates, iirc
13:20 geekosaur so "boiling" point is below freezing point
13:20 brrt that's right, i think
13:21 brrt ice tea it is then
13:52 domidumont joined #moarvm
13:59 lizmat joined #moarvm
14:28 zostay joined #moarvm
14:35 arnsholt If I remember Red Mars correctly, ice sublimates on Mars, yeah
14:42 lizmat joined #moarvm
14:42 moritz water ice?
14:44 moritz on https://en.wikipedia.org/wiki/Triple_point#/media/File:Phase_diagram_of_water.svg you can see the conditions for liquid water
14:45 moritz basically if you have 0.1 atmospheres, it's only liquid from ~0C to 50C
14:45 moritz oh, but mars has only 6mbar pressure
14:45 moritz that's... not much
14:46 brrt 6mbar, 600 Pa?
14:46 moritz 600Pa, and the triple point of water is at 611Pa
14:46 brrt so, sublimation, indeed
14:46 moritz right
14:47 brrt actually, on mars most of it will be vapor
14:47 brrt anyway, dinner &
14:48 moritz of course, googling "does water sublimate on Mars?" would have been easier
14:48 moritz but figuring that out based on the phase diagram offers some kind of gratification :-)
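Spelled out, the reasoning from the phase diagram is just a pressure comparison (standard textbook values; the exact Martian figure varies with location and season):

    p_{\mathrm{Mars}} \approx 600\,\mathrm{Pa} \;<\; p_{\mathrm{triple}} \approx 611.657\,\mathrm{Pa}
    \;\Longrightarrow\; \mathrm{H_2O(s)} \to \mathrm{H_2O(g)}

Below the triple-point pressure there is no temperature at which liquid water is stable, so surface ice sublimates rather than melting or boiling.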
14:59 lizmat joined #moarvm
15:42 nine That was a short nap...
15:42 lizmat nine: I close my laptop when moving from one room to another  :-)
16:03 domidumont joined #moarvm
16:13 lizmat joined #moarvm
16:17 dalek MoarVM/new-multi-cache: cffe519 | jnthn++ | src/ (3 files):
16:17 dalek MoarVM/new-multi-cache: Start exploring a new multi-cache design.
16:17 dalek MoarVM/new-multi-cache:
16:17 dalek MoarVM/new-multi-cache: This can handle named parameters also, and uses callsite interning to
16:17 dalek MoarVM/new-multi-cache: do fewer checks, and a tree rather than an array of arrays for similar
16:17 dalek MoarVM/new-multi-cache: reasons. Still very much in testing, and re-integration with spesh
16:17 dalek MoarVM/new-multi-cache: still to come so probably slower in all the cases where that helped.
16:17 dalek MoarVM/new-multi-cache: review: https://github.com/MoarVM/MoarVM/commit/cffe51998b
16:18 jnthn Note that Rakudo will also need patches to be able to take advantage of this, and that it's still not especially tested/polished.
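Roughly, the tree-shaped cache described in that commit could be pictured like this. This is a hypothetical sketch only, with invented type and function names rather than MoarVM's actual structs: an interned callsite selects a root by pointer identity, and each level of the tree branches on the type seen at one argument position, so a hit walks one node per argument instead of scanning an array of candidate tuples.

    #include <stddef.h>

    /* One node per decision point; each level of the tree branches on the
     * type seen at one argument position. */
    typedef struct MultiCacheNode MultiCacheNode;
    struct MultiCacheNode {
        void           *type;          /* type to match at this argument position */
        MultiCacheNode *children;      /* branches for the next argument           */
        unsigned int    num_children;
        void           *result;        /* chosen candidate, if this is a leaf      */
    };

    /* Walk the tree: one pass over the children per argument, pointer
     * comparisons only. Returns the cached candidate or NULL on a miss. */
    static void * multi_cache_lookup(MultiCacheNode *node, void **arg_types,
                                     unsigned int num_args) {
        unsigned int i, j;
        for (i = 0; i < num_args; i++) {
            MultiCacheNode *next = NULL;
            for (j = 0; j < node->num_children; j++) {
                if (node->children[j].type == arg_types[i]) {
                    next = &node->children[j];
                    break;
                }
            }
            if (!next)
                return NULL;
            node = next;
        }
        return node->result;
    }

This only illustrates why a tree plus interned callsites needs fewer checks than scanning an array of argument tuples; how named arguments and spesh integration fit in is exactly what the branch is still exploring.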
16:21 timotimo the other day before falling asleep i was wondering if we should keep around an array of MVMCodepoint32 for the very common ascii range and maybe also a bunch of combinings (like up to -128 or something) and if a string is just one character long (like when we comb or substr) we'd just point into that array
16:22 jnthn Gets annoying when you gotta free it afterwards :)
16:22 timotimo if we have that array once per instance, it'll be a simple range check, though. maybe it's fine
16:23 jnthn What's the benchmark we're aiming to help, ooc?
16:23 timotimo it'll surely be a bit cheaper than mallocing a 4-byte big area, or mallocing a strand
16:23 timotimo i don't have one. as i said, it was during that strange time before falling asleep :)
16:23 jnthn I wonder a bit if a cache of MVMString * for the ASCII range would be easier and a bigger win
16:23 jnthn So if you see you're getting a 1-char string in that range you check if re-use is an option.
16:24 jnthn Which'd save a helluva lot of junk production with .comb for example
16:24 timotimo a few months earlier i actually thought we could totally store a single grapheme into the pointer we'd usually use to point at a strand or MVMGrapheme32 array
16:24 jnthn That's also an option :)
16:24 timotimo but that'd probably want special-casing in every single string op
16:24 jnthn Yeah, it's also a pain like that :)
16:25 jnthn Could be fragile
16:25 timotimo to be fair, it could even store two graphemes. that'll make it a few times more painful, though
16:27 jnthn Time for a break :)
16:27 * timotimo is AFK for a bit, too
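For the tagged-pointer variant timotimo mentions, a hypothetical illustration (invented names, not MoarVM code; assumes a 64-bit build where the grapheme buffer pointer is at least 2-byte aligned, so its low bit is free to use as a tag):

    #include <stdint.h>

    typedef int32_t Grapheme32;

    /* Stash a single grapheme in the word that would normally hold the
     * pointer to the grapheme buffer, using the low bit as the tag. */
    static inline uintptr_t grapheme_pack(Grapheme32 g) {
        return ((uintptr_t)(uint32_t)g << 1) | 1u;   /* zero-extend, shift past the tag bit */
    }

    static inline int grapheme_is_inline(uintptr_t storage) {
        return (int)(storage & 1u);    /* real pointers never have this bit set */
    }

    static inline Grapheme32 grapheme_unpack(uintptr_t storage) {
        return (Grapheme32)(uint32_t)(storage >> 1);
    }

The catch, as noted above, is that every string operation would then have to check the tag before touching the storage, which is where the pain and fragility come in.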
17:40 lizmat joined #moarvm
18:20 domidumont joined #moarvm
18:23 cognominal joined #moarvm
18:30 cognominal joined #moarvm
18:34 FROGGS joined #moarvm
18:36 brrt joined #moarvm
18:37 brrt good * again
18:37 FROGGS o/
18:40 brrt \o FROGGS
18:41 brrt i wonder what kind of program we could organise for MOARCONF
18:41 nwc10 would there be breaks for talks between the coffee?
18:42 brrt if the venue allows it
18:42 FROGGS O.o
18:42 FROGGS MOARCONF?
18:43 brrt yeah, like YAPC, but for moarvm or something
18:43 FROGGS and it would potentially happen in central Europe I guess
18:44 brrt we talked, or joked, about it yesterday
18:44 brrt but yeah
18:45 * lizmat is +1
18:46 FROGGS aye
18:46 FROGGS would be very very awesome to let jnthn explain awesome architectury stuff, and then we hack :D
18:47 brrt lizmat++ suggested at or after APW. that is not a bad idea
18:47 brrt it's just that it'll be the third weekend of my working life, and it might not be possible / a good idea to ask for days off already
18:50 brrt so selfishly i'd think maybe somewhat later
19:21 lizmat joined #moarvm
20:42 timotimo i wonder if i should just go ahead and start on the string cache thing
20:44 * timotimo goes ahead
20:44 lizmat battery low&
21:05 rurban_ joined #moarvm
21:15 timotimo timo@schmetterling ~> time perl6 -e 'for lines() { .comb }' < /usr/share/dict/linux.words
21:15 timotimo ^- doesn't show a noticeable improvement
21:16 timotimo it only takes about 3 seconds anyway, though
21:16 timotimo and only ~80 megs of ram
21:17 jnthn Well, you're racing the GC. GC costs something, but so will caching.
21:18 timotimo true. also, all those strings will die immediately
21:18 timotimo maybe i should instead have appended the result of .comb into a global list
21:19 timotimo huh, the number of GC runs doesn't even differ between the two runs; with and without my cache
21:19 timotimo 365 for each of the runs
21:20 jnthn hm, does your cache even get hit? :)
21:20 timotimo yeah, i had a piece of output for that
21:22 timotimo tbh it didn't output as often as i thought it would
21:23 timotimo with the list that keeps the results around i get fewer GC runs when the cache is active
21:25 timotimo i think it might have to do with the check i made for "is the source MVM_...GRAPHEME_32?" which i didn't rely on at all
21:25 timotimo i fixed that just now and will re-run the tests
21:27 timotimo i'm considering adding a little piece of code into the JIT for chr that'll try the cache before it invokes the C function
21:28 timotimo 1226 GC runs vs 1334
21:29 timotimo ending in 4474 gen2 roots with the one and 10489 with the other
21:30 timotimo and at the 1226th GC run in the longer one, it's at 12693 gen2 roots
21:30 timotimo not entirely sure i understand that, but it's nice to have either way
21:34 timotimo jnthn: master, or into a branch?
21:35 timotimo also, you think there's another good benchmark for this?
21:35 timotimo the benchmark i have here is 42% (vs used-to-be 46%) GC time, btw
21:37 timotimo weird, i'm running into a whole lot of scalars allocated by dispatch:<!> (only a very small percentage compared to Scalars allocated in push, but still curious)
21:38 jnthn timotimo: Branch, I'd like to have a look at it. :)
21:38 timotimo good
21:40 timotimo and also i'm still convinced all the allocations for BOOTHash are either bogus (as in, shouldn't be in the profile) or bogus (as in, shouldn't be allocated)
21:42 jnthn Many are for *%_ :)
21:42 timotimo ha! i removed those scalars by having \name instead of $name
21:42 timotimo should we perhaps have a piece of analysis for those *%_? "this isn't assigned to, so just use shared global immutable empty hash"?
21:43 jnthn Well, spesh can eliminate it in many cases also
21:44 jnthn Unfortunately, spesh limitations prevent it doing so on various megamorphic things...like sink
21:44 timotimo in this case it's pull-one, and push twice
21:45 timotimo and, unsurprisingly, sink
21:45 timotimo and reification-target, iterator, append ... another sink
21:45 timotimo even "defined" :(
21:45 jnthn Yup
21:45 jnthn It's on my "things to deal with in the next spesh shake-up"
21:47 timotimo yes :)
21:47 timotimo i also found the pull-one of comb allocates a crapton of Int, i may have found the fix for it, too
21:48 timotimo 4953680 times invoked, 4473852 times allocated
21:48 timotimo not sure what's up with that :)
21:49 timotimo m: say 4953680 - 4473852
21:49 camelia rakudo-moar fd98fc: OUTPUT«479828␤»
21:49 timotimo m: say 4953680 / 4473852
21:49 camelia rakudo-moar fd98fc: OUTPUT«1.10725165␤»
21:50 timotimo 10%? kind of strange
21:51 timotimo hm, that didn't fix it
21:55 jnthn Time for me to rest...'night o/
21:55 timotimo gnite jnthn!
22:04 timotimo 1149 gc runs now
22:26 timotimo 4473852 IntAttrRef allocated by pull-one from comb; that'd probably be worth a bit when optimized away into just direct access
22:27 timotimo m: say 361 / (3449 + 361)
22:27 camelia rakudo-moar fd98fc: OUTPUT«0.094751␤»
22:27 lizmat joined #moarvm
22:28 timotimo ^- that's the percentage of how much stuff gets committed to gen2 during the comb - append benchmark
22:28 timotimo there are so many more wins to be had, all in all ...
22:29 timotimo weird, there's postfix:<--> being run on Num
22:33 lizmat dinner&
22:58 dalek MoarVM/short_string_cache: 4bbb2c6 | timotimo++ | src/ (5 files):
22:58 dalek MoarVM/short_string_cache: this adds a cache for strings that are one codepoint long
22:58 dalek MoarVM/short_string_cache:
22:58 dalek MoarVM/short_string_cache: in addition, i think it might be nice to also have one
22:58 dalek MoarVM/short_string_cache: long array of all codepoint values that could be in the
22:58 dalek MoarVM/short_string_cache: storages of these strings, but that'll add a check when
22:58 dalek MoarVM/short_string_cache: freeing MVMString objects. Might not be worth it.
22:58 dalek MoarVM/short_string_cache:
22:58 dalek MoarVM/short_string_cache: I didn't add a mutex because i think it's not needed;
22:58 dalek MoarVM/short_string_cache: updating the pointer shouldn't break when multiple threads
22:58 dalek MoarVM/short_string_cache: race it, and since the strings are gc-managed anyway,
22:58 dalek MoarVM/short_string_cache: things will properly be freed if they get overwritten
22:58 dalek MoarVM/short_string_cache: by another thread. Also, strings that end up in the same slot will
22:58 dalek MoarVM/short_string_cache: have equal contents, too.
23:21 diakopter dalek burn
23:29 cognominal joined #moarvm
23:32 timotimo sacrificing 1kbyte to have storage for all those one-char strings doesn't sound so terrible, tbh
23:33 timotimo special-casing one- and two-point storage in strings seems a little bit daunting, though. especially since we still have that hash stuff, don't we?
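The lookup side of such a one-codepoint cache could look roughly like this. The names below are invented for illustration and are not the code in the short_string_cache branch; MVMString, MVMThreadContext and MVMGrapheme32 are assumed to come from MoarVM's headers, and in real code the cache slots would additionally need to be marked as GC roots so the cached strings stay alive across collections.

    #include "moar.h"

    #define ONE_CHAR_CACHE_SIZE 128     /* cover the ASCII range */

    typedef struct {
        MVMString *one_char_strings[ONE_CHAR_CACHE_SIZE];
    } OneCharStringCache;

    /* Hypothetical helper that allocates a fresh one-grapheme string. */
    MVMString * make_one_char_string(MVMThreadContext *tc, MVMGrapheme32 cp);

    /* Return a cached string for codepoint cp, creating and publishing it on
     * first use. Racing threads may briefly create duplicates, but the strings
     * are GC-managed and identical in content, so a lost race is harmless. */
    MVMString * one_char_string(MVMThreadContext *tc, OneCharStringCache *cache,
                                MVMGrapheme32 cp) {
        MVMString *s;
        if (cp < 0 || cp >= ONE_CHAR_CACHE_SIZE)
            return make_one_char_string(tc, cp);    /* synthetics and non-ASCII */
        s = cache->one_char_strings[cp];
        if (!s) {
            s = make_one_char_string(tc, cp);
            cache->one_char_strings[cp] = s;        /* unsynchronized publish */
        }
        return s;
    }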
