Perl 6 - the future is here, just unevenly distributed

IRC log for #moarvm, 2014-06-05


All times shown according to UTC.

Time Nick Message
00:47 jnap joined #moarvm
01:34 FROGGS_ joined #moarvm
04:33 FROGGS_ joined #moarvm
05:15 FROGGS_ joined #moarvm
06:07 oetiker joined #moarvm
06:07 oetiker_ joined #moarvm
06:08 _sri joined #moarvm
06:31 brrt joined #moarvm
06:42 brrt i'm still in dumb-question mode this morning, but
06:43 brrt an MVMCallSite is made by the caller, and has to be processed by the callee, right?
06:43 brrt so the callee has to be 'interpretative' of the MVMCallSite
06:44 brrt would it be plausible / possible at all to have the callee supply the MVMCallSite and have the caller build a 'correct' one?
06:44 brrt bbi30
07:08 zakharyas joined #moarvm
07:59 oetiker joined #moarvm
07:59 oetiker_ joined #moarvm
08:07 FROGGS_ joined #moarvm
08:13 jnthn brrt: Callsites are static (and even interned) rather than made, the exception being when we flatten.
08:14 jnthn So the caller has nothing to build.
08:15 jnthn Also, spesh actually does specialization partly by the shape of the incoming callsite.
08:16 jnthn So once you're looking at a spesh'd thing, in the common cases today things like checkarity and coercions are gone.
08:16 jnthn uh, checking if we need coercion, even.
08:19 brrt ok, but they're still a caller-supplied thing, right?
08:23 brrt i sometimes feel as if moarvm is overwhelmingly large
08:24 jnthn Yes, a call sequence is
08:25 jnthn prepargs <callsite>
08:25 jnthn arg instructions
08:25 jnthn invoke <invokee>
08:25 brrt ok
08:25 brrt i'll write that down
08:32 brrt it's /probably/ a good idea to make a single node of that
08:32 jnthn Anyway, the caller doesn't always know what the callee will be.
08:32 jnthn Whereas callees know what they want.
08:33 brrt yes, that is true
08:33 brrt hmmm
08:33 brrt does spesh spesh on callsite? i.e. when compiling, can we (after a guard) assume the args to have a certain format?
08:34 brrt or is that something that is being worked on
08:34 donaldh joined #moarvm
08:35 * brrt afk for an hour
08:38 jnthn Yes
08:38 jnthn A specialization is always callsite + guards
08:38 jnthn That's why we intern callsites: to make the "does the callsite match" just a pointer comparison.
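
For illustration, a minimal C sketch of "a specialization is always callsite + guards", with interning making the callsite check a single pointer comparison. All type and field names here are placeholders, not MoarVM's actual definitions:

    /* Hypothetical sketch: interned callsites mean "does the callsite
     * match" is one pointer comparison; only then are guards checked. */
    typedef struct SpeshCandidate {
        const void *callsite;                  /* interned callsite pointer */
        int        (*guards_ok)(void *args);   /* type/concreteness guards  */
        void        *specialized_bytecode;
    } SpeshCandidate;

    static void *select_candidate(SpeshCandidate *cands, int num_cands,
                                  const void *callsite, void *args) {
        for (int i = 0; i < num_cands; i++) {
            if (cands[i].callsite == callsite      /* pointer compare */
                && cands[i].guards_ok(args))
                return cands[i].specialized_bytecode;
        }
        return NULL;                   /* no match: run unspecialized code */
    }
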
09:24 brrt joined #moarvm
09:30 brrt ok, i get it :-)
09:34 dalek MoarVM/moar-jit: d0bff6d | (Bart Wiegmans)++ | / (2 files):
09:34 dalek MoarVM/moar-jit: Add jit.h headers to installation, so including moar.h works when installed.
09:34 dalek MoarVM/moar-jit: review: https://github.com/MoarVM/MoarVM/commit/d0bff6d287
09:36 timotimo so i'm thinking about strength reduction again
09:36 timotimo doing a whole bunch of iterations of * 8 and then div 8 is only marginally slower than +< 3 and then +> 3
09:37 timotimo but * 8 and / 8 is about 10x slower; unsurprisingly, as / gives us rational number semantics
09:37 timotimo i'm not sure how to make such an optimization work properly, since it requires an integer value to be present, rather than a rat
09:38 timotimo and strength reduction at the bytecode level (or in spesh) would probably only hit places where we have already-int-typed stuff :\
09:40 timotimo at least, if the $i is int-typed, it's a bit faster than without it, but it requires "div" explicitly again
09:45 timotimo oh, wow, with a native-typed int, +< 3 and +> 3 is more than 2x faster than * 8 and div 8
09:47 jnthn Doing it with native types is probably a good place to start
09:49 timotimo aye, it also seems to be worth it
09:49 timotimo would you prefer i do it in spesh or in optimizer?
09:51 timotimo i wonder how often we have a known int value in spesh that we don't have in the optimizer
10:09 * brrt lunching
10:10 jnthn timotimo: Not sure, really...
10:11 jnthn timotimo: Putting it in spesh means we may well get more out of it because other things may reduce to things we can strength-reduce...
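
As a rough sketch of the rewrite under discussion, at the level of an instruction stream over native ints; the opcode names and instruction layout below are made up for illustration, not spesh's real ones:

    /* Hypothetical strength reduction: a multiply by a power-of-two
     * constant becomes a left shift. Only safe for native integer
     * operands, which is why native types are the natural starting point. */
    typedef struct Ins { int op; long long args[3]; } Ins;
    enum { OP_MUL_I, OP_SHL_I };

    static int power_of_two_exponent(long long c) {
        int n = 0;
        if (c <= 0 || (c & (c - 1)) != 0)
            return -1;                  /* not a power of two */
        while (c > 1) { c >>= 1; n++; }
        return n;
    }

    /* args[2] is assumed to hold a known constant operand */
    static void reduce_strength(Ins *ins) {
        if (ins->op == OP_MUL_I) {
            int shift = power_of_two_exponent(ins->args[2]);
            if (shift >= 0) {
                ins->op      = OP_SHL_I;    /* $x * 8  ==>  $x +< 3 */
                ins->args[2] = shift;
            }
        }
    }
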
10:18 brrt how is the order of optimizations determined, in spesh?
10:20 jnthn There's essentially 3 passes: fact addition/propagation, do the type-driven optimizations, do dead instruction elimination. So, just 3 passes at the moment.
10:22 jnthn Oh, and a 4th before all of that which is args-based.
10:22 jnthn (The one that looks at the callsite)
10:22 jnthn That's the piece I was patching yesterday.
10:30 brrt ok
10:30 brrt good for me to know :-)
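
In outline, the pass ordering described above looks something like this; the function and type names are placeholders, not spesh's actual entry points:

    /* Hypothetical outline of the spesh pipeline as described: an
     * args/callsite-based pass first, then fact addition/propagation,
     * type-driven optimization, and dead instruction elimination. */
    typedef struct Graph    Graph;      /* placeholder types */
    typedef struct Callsite Callsite;
    typedef struct Arg      Arg;

    void specialize_for_args(Graph *g, Callsite *cs, Arg *args);
    void add_and_propagate_facts(Graph *g);
    void optimize_by_type(Graph *g);
    void eliminate_dead_instructions(Graph *g);

    void specialize(Graph *g, Callsite *cs, Arg *args) {
        specialize_for_args(g, cs, args);   /* shape of the incoming callsite  */
        add_and_propagate_facts(g);         /* known types, concreteness, ...  */
        optimize_by_type(g);                /* drop checks the facts obsolete  */
        eliminate_dead_instructions(g);     /* remove what became unused       */
    }
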
10:32 donaldh joined #moarvm
10:35 * brrt afk
10:51 dalek MoarVM: cc338be | jnthn++ | src/io/asyncsocket.c:
10:51 dalek MoarVM: Implement cancelling listening on a socket.
10:51 dalek MoarVM: review: https://github.com/MoarVM/MoarVM/commit/cc338be068
11:11 dalek MoarVM: 3a69c1b | jnthn++ | / (3 files):
11:11 dalek MoarVM: Correct asyncreadbytes op signature.
11:11 dalek MoarVM: review: https://github.com/MoarVM/MoarVM/commit/3a69c1b9a7
11:15 brrt joined #moarvm
11:43 brrt i'm wondering how low the level of a jit graph should be
11:43 brrt lower is better
11:43 brrt but lower is also more work / memory
11:44 brrt jnthn, timotimo, i'd like some advice on what should be in the jit graph and how it differs from the spesh graph
11:44 brrt i /think/ i'd like to abstract operands to values, but i'm not sure if that is a move 'up' or 'down' as it were
11:45 jnthn brrt: If there's anything I've learned about AST-like things, it's that you tend not to need a huge number of nodes.
11:45 jnthn brrt: In this case, really it's mostly about what kinds of things happen at CPU level.
11:45 brrt ok, but that's much smaller than what happens at the moarvm level
11:46 brrt arithmetic, branching, load, store, that's most of it
11:46 jnthn Yup
11:46 jnthn I think the exception is probably CallCFunction
11:46 jnthn Which is a bit higher level
11:47 brrt ok so the idea is to have the speshgraph -> jitgraph do the heavy lifting of 'lowering' the code to jittable size
11:47 jnthn Yes. The stuff beyond JIT graph is architecture specific knowledge.
11:48 jnthn If we don't put the heavy lifting in speshgraph -> jitgraph, we'll have to repeat it for other architectures.
11:48 jnthn So it feels like the right place.
11:48 brrt i agree
11:48 dalek MoarVM: 6b19b4b | jnthn++ | src/io/asyncsocket.c:
11:48 dalek MoarVM: Implement async bytes reads from sockets.
11:48 dalek MoarVM: review: https://github.com/MoarVM/MoarVM/commit/6b19b4b2a4
11:49 brrt and i'm also thinking of abstracting the architecture-specific stuff into functions called 'jit_arithmetic', 'jit_branch', etc
11:49 brrt and have a header between the 'high level' jit and the 'architecture' jit
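
A sketch of the split being proposed: an architecture-neutral header that the 'high level' JIT calls into and that each backend implements. The names below are illustrative only:

    /* Hypothetical interface sitting between the JIT graph builder and an
     * architecture backend; each supported architecture would provide
     * implementations of these. */
    typedef struct JitCompiler JitCompiler;   /* placeholder types */
    typedef struct JitNode     JitNode;

    void jit_arithmetic(JitCompiler *jc, JitNode *node);  /* add, mul, shl, ... */
    void jit_branch    (JitCompiler *jc, JitNode *node);  /* conditional jumps  */
    void jit_load      (JitCompiler *jc, JitNode *node);  /* memory -> register */
    void jit_store     (JitCompiler *jc, JitNode *node);  /* register -> memory */
    void jit_call_c    (JitCompiler *jc, JitNode *node);  /* CallCFunction node */
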
11:51 * brrt is afk again :-)
12:44 jnap joined #moarvm
13:26 harrow joined #moarvm
13:36 jnthn Errands done...time for more work :)
13:36 nwc10 work's work, or this work?
13:38 jnthn Moar work :)
13:41 FROGGS_ +1 from me then :o)
13:41 brrt joined #moarvm
13:48 btyler joined #moarvm
15:00 * brrt has no idea what i'm doing....
15:00 brrt or rather
15:00 brrt i don't have a good story wrt memory access yet
15:00 brrt not sure whether it's necessary
15:01 FROGGS_ memory... what was that again?
15:02 jnthn I forgot.
15:02 jnthn brrt: Can you be more specific?
15:02 brrt basically, i have no way to represent 'i want to access this variable' in the jit graph
15:03 oetiker joined #moarvm
15:03 brrt i want to do clever things with that, but i don't know how to present that yet, because i have no way to declare a 'block' - only single instructions
15:04 oetiker_ joined #moarvm
15:04 brrt which is another way of saying 'my jit graph is much too simplistic'
15:05 brrt and i /want/ simplistic right now
15:06 jnthn I guess you could have an "access this MVM register" node, but won't that typically be a pointer displacement off the register case?
15:06 jnthn *base?
15:07 brrt yes
15:07 brrt it will be
15:08 brrt (for most things)
15:08 jnthn It *may* be worth conveying "it's a register access" lower down, since on less register-starved architectures we can probably keep ->work in a register for fast indexing off, but maybe we can't afford that on x86...
15:09 nwc10 x86_64 overtook x86
15:10 brrt wiki says 64 general-purpose registers
15:10 brrt arm iirc has 32, of which only 8 are available at any given time
15:10 nwc10 (at least, that was for debian installs, a year or so back)
15:10 nwc10 32 bit ARM has 16 mapped in, of which (if you push it as an assembler programmer) you can get to 14 in use
15:11 nwc10 and certainly the C compiler should be able to find 12
15:11 brrt but the ******** thing about x86 registers is that they are /all/ to be considered overwritten after a function call
15:12 nwc10 anyway, my point was "does ia32 matter enough to optimise for *it*?"
15:12 nwc10 I would have thought that x86_64 and ARM (possibly ARM v8) are the two that matter currently
15:12 nwc10 v8 is the new 64 bit thingy
15:12 timotimo do ia32 get sold at all?
15:12 nwc10 v7 and earlier are 32 bit
15:13 brrt probably, yes, but not for consumer electronics anymore
15:13 nwc10 timotimo: I don't know. I'd guess so, for embedded
15:13 nwc10 oh, what he said
15:13 timotimo OK
15:13 timotimo embedded isn't our target at the moment
15:13 timotimo as the empty perl6 program still takes ~123 megabytes of ram ;)
15:13 brrt and if it were we'd target arm
15:13 nwc10 I doubt anyone will be deploying Perl 6 code in revenue earning servers on anything other than x86_64 and ARM
15:13 brrt that's nothing
15:13 brrt :-p
15:13 timotimo and the setting is still going to grow
15:14 brrt don't you want to run perl6 on your 1998 powerpc imac? :-p
15:14 nwc10 there was "optimising" and there was "it runs at all"
15:15 nwc10 jnthn: can you give an arm-wavy number to what you mean by "register starved" ?
15:15 nwc10 in that, I think it may be reasonable to assume that >90% of everyone who matters is no longer register starved
15:16 jnthn nwc10: x86 32-bit
15:16 jnthn nwc10: But agree it matters less and less.
15:17 nwc10 making me think that for now, go with KISS and assume "a sufficiency of registers"
15:17 nwc10 (?)
15:17 jnthn works for me.
15:18 brrt oh, 16 registers on amd64
15:18 brrt actually assuming an /insufficiency/ of registers is simpler :-)
15:19 brrt because it means that every operation is a sequence of load, op, store
15:19 brrt whereas having many registers means you have to wonder where they were last defined etc
15:19 jnthn brrt: Note that you need to do that not only due to register starvation, but also because deopt.
15:20 brrt true, but ultimately only when deopting
15:20 jnthn brrt: Unless you can prove you're in a no-deopt zone. :)
15:20 brrt i.e. i want to move to the situation where we /only/ store (spill) registers when deopt is in effect
15:20 jnthn Sure. But one of the things we have to be most careful with is that when you fall out of optimized code, enough is in place that the deopt'd code has what it needs.
15:21 jnthn This works for deopt_one (you know where they are and it's local), a bit less so for deopt_all.
15:21 jnthn But those are only really on invokes.
15:21 brrt hmm
15:22 brrt i haven't looked at the deopt ops enough yet
15:22 brrt i'm off for dinner, be back in an hour (or two)
15:22 jnthn k :)
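
A sketch of the naive load/op/store scheme being discussed, where every MoarVM-level register lives at a fixed displacement off a base pointer and results are written straight back, so a deopt always sees up-to-date registers. All names and the register numbering below are illustrative:

    /* Hypothetical emission of one integer add under the naive scheme.
     * "work offsets" are displacements off the frame's register storage;
     * storing the result back immediately avoids any separate spill
     * logic at deopt points. */
    typedef struct Emitter Emitter;                 /* placeholder backend */
    void emit_load (Emitter *e, int cpu_reg, int work_offset);
    void emit_add  (Emitter *e, int dst_cpu_reg, int src_cpu_reg);
    void emit_store(Emitter *e, int work_offset, int cpu_reg);

    void jit_add_i(Emitter *e, int dst_off, int a_off, int b_off) {
        emit_load (e, 0, a_off);     /* load operand a into CPU reg 0    */
        emit_load (e, 1, b_off);     /* load operand b into CPU reg 1    */
        emit_add  (e, 0, 1);         /* reg0 += reg1                     */
        emit_store(e, dst_off, 0);   /* write result back to work[dst]   */
    }
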
15:23 dalek MoarVM/moar-jit: 2a500f4 | (Bart Wiegmans)++ | src/ (5 files):
15:23 dalek MoarVM/moar-jit: Very simplistic and naive jit graph building. Which is kind of the purpose.
15:23 dalek MoarVM/moar-jit: review: https://github.com/MoarVM/MoarVM/commit/2a500f4a9e
15:24 dalek MoarVM: ba1937b | jnthn++ | / (13 files):
15:24 dalek MoarVM: Resolve "which spesh cand" statically if possible.
15:24 dalek MoarVM:
15:24 dalek MoarVM: This means we don't need to check the guards if we have enough to
15:24 dalek MoarVM: prove they will always be met.
15:24 dalek MoarVM: review: https://github.com/MoarVM/MoarVM/commit/ba1937b003
15:38 timotimo oh cool
15:39 timotimo coming close to inlining it seems?
15:39 timotimo less guards is more better :)
15:40 timotimo can also mean less deopt points later on, eh?
15:40 jnthn true
15:40 [Coke] brrt: no, I never ever ever want to run perl6 on a powerpc imac.
15:41 TimToady what if you place your phone on the imac?
15:45 jnthn timotimo: Well, that's one bit of logic we need for it at least.
15:58 dalek MoarVM: 6b45de6 | jnthn++ | / (6 files):
15:58 dalek MoarVM: Pick a spesh threshold by bytecode size.
15:58 dalek MoarVM:
15:58 dalek MoarVM: This means large things that take more time to spesh will need to do
15:58 dalek MoarVM: more work to prove they are hot.
15:58 dalek MoarVM: review: https://github.com/MoarVM/MoarVM/commit/6b45de6fd6
16:11 FROGGS joined #moarvm
16:35 dalek MoarVM/inline: 25680a8 | jnthn++ | / (4 files):
16:35 dalek MoarVM/inline: Empty stub of can-we-inline check.
16:35 dalek MoarVM/inline: review: https://github.com/MoarVM/MoarVM/commit/25680a8bfc
16:36 jnthn Well, here starts the next round of hard things... :)
16:36 jnthn Dinner first :)
17:44 brrt joined #moarvm
17:50 brrt ok, my MVMJitValue type is pretty much wrong
17:52 brrt i /might/ need to represent all forms of floats and integers but i really need to represent literals, pointers, registers, lexicals first
17:54 japhb_ jnthn, How did you pick those new spesh thresholds?  Are they based on measurements, analysis of algorithmic complexity, gut feeling, ... ?
18:08 * brrt is almost out of energy :-(
18:12 FROGGS :/
18:12 FROGGS brrt: take a walk, that helps :o)
18:22 jnthn japhb_: Gut feeling, with a glance at the spesh_log afterwards, and a hope that somebody will jump in to do some measuring/tuning.
18:28 TimToady could go as far as to have an 'is hot' to spesh it from the getgo
18:29 [Coke] I don't think I'd trust anyone crazy enough to write perl 6 with that power.
18:29 TimToady that leaves out most of us :)
18:30 jnthn Well, the things we'd really like it to inline are operators.
18:30 jnthn And accessors
18:31 jnthn And identity methods
18:31 jnthn All of which are quite tiny :)
18:37 zakharyas joined #moarvm
18:54 brrt joined #moarvm
18:54 brrt yes, it does :-)
19:02 brrt does anyone think it makes a lot of sense to distinguish between lvalues and rvalues? (on the machine level)
19:02 brrt except for literals, both lvalues and rvalues can be the same
19:02 brrt i.e. register, lexical, pointer
19:02 brrt also
19:03 brrt ... are we always guaranteed to be able to resolve lexicals?
19:03 jnthn Depends which lookup
19:03 timotimo yes
19:03 jnthn The by name ones are very late bound
19:03 timotimo well, yeah
19:03 jnthn The index ones are not.
19:04 jnthn And should always work out.
19:04 timotimo but regular ones are done at compile time already
19:04 jnthn brrt: Is your JitValue thing a bit like a SpeshOperand?
19:04 brrt ehm... kind-of
19:04 jnthn brrt: Except carrying type info too?
19:05 brrt i'm not sure about carrying type info, although probably yes
19:05 brrt (i don't seem to be sure of anything today)
19:05 jnthn Well, I saw a slot in there for what kind of value it was. :)
19:06 brrt true, but ehm... that was only literals, and i've come to the conclusion that literals aren't enough
19:06 jnthn JitInstruction with an array of JitOperands (probably a better name) could work out.
19:06 jnthn In Spesh we don't track the type because we have data on the ops
19:06 jnthn So we know how to interpret the union.
19:08 hoelzro is there a reason certain operations (ie. symlink) are implemented as operations in bytecode instead of simply functions one can call?
19:08 brrt ok. but is the distinction between literal / pointer + offset / lexical / register good enough?
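
One way to carry that distinction, sketched below; as in spesh, the instruction itself can say how each operand is to be interpreted, so no per-operand tag is strictly needed. Names and layout are illustrative:

    /* Hypothetical operand representation covering the forms mentioned:
     * literal, MoarVM register, lexical, and pointer + constant offset.
     * The op in the instruction determines which union member is live. */
    typedef union JitOperand {
        long long literal;                        /* constant value            */
        int       reg;                            /* index into frame ->work   */
        struct { int outers; int idx;    } lex;   /* lexical: hops + index     */
        struct { void *base; int offset; } mem;   /* pointer + constant offset */
    } JitOperand;

    typedef struct JitInstruction {
        int        op;              /* which operation, and how to read...    */
        JitOperand operands[3];     /* ...each entry of this operand array    */
    } JitInstruction;
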
19:08 brrt calling functions isn't simple? :-p i don't know. for the jit they're functions
19:09 jnthn hoelzro: Because then you need to design a *second* mechanism for dispatching those built-in functions. Why do that?
19:09 jnthn The interp is already perfectly good at dispatching VM-supported operations.
19:10 hoelzro I dunno, I was just curious =)
19:10 jnthn :)
19:10 jnthn brrt: I'd probably stick to "current frame lexicals" as JITting directly and call a function for outer ones for now...
19:11 jnthn brrt: Though both work, of course.
19:11 jnthn brrt: I'd say pointer and offset are two operands.
19:12 brrt and.. i'd say that depends on the architecture
19:12 jnthn brrt: And it's the instruction specifying that there's an offset there...
19:12 brrt but fair enough
19:12 jnthn brrt: Well, but the tree is there to translate into any arch
19:12 brrt hmm...
19:12 brrt i guess what i'm saying is
19:13 brrt a 'register' to the jit is just an address from which to load, relative to a base
19:14 brrt similarly, a pointer + offset is just a memory location, hopefully a known memory location
19:14 brrt hmmm
19:14 brrt wait
19:14 brrt and i'd like to have instructions whose addressing translates naturally from moar-level to machine-level
19:15 brrt if i made the offset to the pointer another operand, i'd have to 'know' that, it wouldn't be a single thing anymore
19:15 brrt on the other hand, any actual use of a fixed pointer and offset would indeed be two operands, as they could vary independently
19:15 brrt that doesn't make any sense, what i just said
19:16 brrt ok, i'm wrong
19:16 brrt :-)
19:16 brrt basically if i'm accessing a fixed offset from a fixed pointer, then i'd just need a single pointer
19:17 brrt if pointer or offset can vary, then they need to be two operands
19:18 jnthn In the common case the pointer comes from a CPU register, and the offset is a constant...
19:19 lue joined #moarvm
19:19 TimToady depends on array vs struct
19:19 brrt hmm
19:25 jnthn True, though array access is done inside the REPR funcs at the moment, which we don't yet inline.
19:25 jnthn But yes, later on the array case will matter.
19:27 * TimToady wants efficient arrays of ints/nums to matter very much :)
19:28 jnthn TimToady: Just looking at it from a "what we'll get most value out of JITting in the next month or two" perspective.
19:28 jnthn So long as we don't box ourself in, we're good :)
19:28 jnthn The array repr stuff really will want to learn about multi-dim in the end, I think...
19:35 lue joined #moarvm
19:39 brrt i'm just going to steal ideas, how about that
19:40 jnthn I think that's called "research" :)
19:40 dalek MoarVM/inline: 40179c7 | jnthn++ | src/spesh/inline.c:
19:40 dalek MoarVM/inline: Add inline bytecode size check.
19:40 dalek MoarVM/inline: review: https://github.com/MoarVM/MoarVM/commit/40179c7bad
19:40 dalek MoarVM/inline: 3c0ffaf | jnthn++ | src/spesh/graph. (2 files):
19:40 dalek MoarVM/inline: Make spesh graph build more flexible wrt bytecode.
19:40 dalek MoarVM/inline:
19:40 dalek MoarVM/inline: This is a step towards being able to graph a specialized form of the
19:40 dalek MoarVM/inline: bytecode, which we'll need to do to get a graph to merge while doing
19:40 dalek MoarVM/inline: inlining.
19:40 dalek MoarVM/inline: review: https://github.com/MoarVM/MoarVM/commit/3c0ffaf88c
19:55 dalek MoarVM/inline: 2d9c6ff | jnthn++ | src/spesh/graph. (2 files):
19:55 dalek MoarVM/inline: Decouple handler in graph build from original too.
19:55 dalek MoarVM/inline:
19:55 dalek MoarVM/inline: If building handlers for spesh'd bytecode for inline, we'll need to use
19:55 dalek MoarVM/inline: the fixed up handler addresses.
19:55 dalek MoarVM/inline: review: https://github.com/MoarVM/MoarVM/commit/2d9c6ff6a3
19:55 dalek MoarVM/inline: bd4d7a3 | jnthn++ | src/spesh/ (3 files):
19:55 dalek MoarVM/inline: Build inline graph for specialized bytecode.
19:55 dalek MoarVM/inline: review: https://github.com/MoarVM/MoarVM/commit/bd4d7a3170
20:01 [Coke] test
20:01 [Coke] %9test
20:01 [Coke] whoops, wrong window.
20:02 japhb Though if your test was to determine if you had network access to the IRC server, then you passed.  :-)
20:10 brrt http://wiki.luajit.org/SSA-IR-2.0 is interesting
20:10 brrt also
20:10 brrt they use only a single ir
20:11 dalek MoarVM/inline: 6df0e6e | jnthn++ | src/core/oplist:
20:11 dalek MoarVM/inline: Mark some ops as not suitable to inline.
20:11 dalek MoarVM/inline: review: https://github.com/MoarVM/MoarVM/commit/6df0e6e94a
20:11 dalek MoarVM/inline: e63acc4 | jnthn++ | tools/update_ops.p6:
20:11 dalek MoarVM/inline: Fix oplist parser bug.
20:11 dalek MoarVM/inline:
20:11 dalek MoarVM/inline: An op with no operands but an adverb was mis-parsed.
20:11 dalek MoarVM/inline: review: https://github.com/MoarVM/MoarVM/commit/e63acc4e5c
20:11 dalek MoarVM/inline: 4c53a61 | jnthn++ | src/core/ (2 files):
20:11 dalek MoarVM/inline: Re-generate ops.c, carrying no-inline data.
20:11 dalek MoarVM/inline: review: https://github.com/MoarVM/MoarVM/commit/4c53a61592
20:20 jnthn bbi15
20:35 brrt luajit uses seemingly recursive c preprocessor macros
20:35 brrt wow
20:43 * jnthn back
20:45 brrt hi :-)
20:46 * jnthn figures he'll nudge preparations for inlining a little further along. :)
20:48 * brrt wonders how you have so much energy
20:49 lizmat I think it's called "being in the flow"
20:50 brrt well then... i'd like to have that more
20:53 brrt anyway.. luajit never has pointers, so doesn't have my issue
20:53 brrt although it's perfectly plausible just to copy their design
20:53 brrt (the IR doesn't have pointers, that is)
20:54 brrt i'm starting to see why that's clever
20:56 jnthn :)
20:56 * jnthn wonders if it can be replicated
20:57 brrt hmm... probably not
20:57 brrt or
20:57 brrt hmmm
20:58 brrt some aspects of it can be replicated, i'm sure
21:00 brrt i'm going to sleep again
21:00 brrt see you tomorrow
21:01 brrt left #moarvm
21:03 jnthn 'night o/
21:10 dalek MoarVM/inline: c0813c1 | jnthn++ | src/spesh/inline.c:
21:10 dalek MoarVM/inline: Refuse inline if non-inlinable ops encountered.
21:10 dalek MoarVM/inline: review: https://github.com/MoarVM/MoarVM/commit/c0813c136a
21:10 dalek MoarVM/inline: d502a23 | jnthn++ | src/core/op (2 files):
21:10 dalek MoarVM/inline: For now, capturelex and takeclosure are :noinline.
21:10 dalek MoarVM/inline:
21:10 dalek MoarVM/inline: May be able to relax this later.
21:10 dalek MoarVM/inline: review: https://github.com/MoarVM/MoarVM/commit/d502a23f3b
21:14 lizmat gnight jnthn
21:14 jnthn lizmat: oh, was saying night to brrt :)
21:15 lizmat ah, ok  :-)
21:16 dalek MoarVM/inline: 5e8eb90 | jnthn++ | src/ (3 files):
21:16 dalek MoarVM/inline: Things with free lexicals can't be inlined.
21:16 dalek MoarVM/inline:
21:16 dalek MoarVM/inline: May be able to relax this later with more careful analysis of outer
21:16 dalek MoarVM/inline: relationships; for now keep things simple.
21:16 dalek MoarVM/inline: review: https://github.com/MoarVM/MoarVM/commit/5e8eb901f2
21:35 dalek MoarVM/inline: aeb1f12 | jnthn++ | src/spesh/optimize.c:
21:35 dalek MoarVM/inline: Try building inline graph; don't use it yet.
21:35 dalek MoarVM/inline: review: https://github.com/MoarVM/MoarVM/commit/aeb1f121dc
21:41 jnthn Well, there's the easy initial analysis bits done. :)
21:42 jnthn Need to work on the graph merge next. That'll be trickier. :)
21:47 jnthn (So I'll save it for a new day. :))
23:30 cognominal joined #moarvm
