IRC log for #marpa, 2015-02-15


All times are shown in UTC.

Time Nick Message
02:49 ilbot3 joined #marpa
02:49 Topic for #marpa is now Start here: http://savage.net.au/Marpa.html - Pastebin: http://scsys.co.uk:8002/marpa - Jeffrey's Marpa site: http://jeffreykegler.github.io/Marpa-web-site/ - IRC log: http://irclog.perlgeek.de/marpa/today
03:47 Aria Testing displays in docs is awesome. I've seen a few people do that in JS, and it's under-done.
03:47 Aria ... We need a better parser for docs ;-)
05:05 jeffreykegler joined #marpa
05:06 jeffreykegler Aria: thanks
05:06 jeffreykegler Since I have an audience, :-)
05:06 jeffreykegler I'll give my tips for doing a doc-testing system -- AFAIK nobody else does this, so there was nobody to copy.
05:08 jeffreykegler First a non-tip -- my system does *not* use Marpa itself -- it goes back many years, and I did not want to risk the chicken-and-egg problems.  If that ever was a good reason not to use Marpa, it no longer is.
05:10 jeffreykegler Now, the first tip -- I separate formatting for the doc and formatting for testing, and compare the two -- that way you can more easily work on one at a time, and you don't get into issues like a doc fix breaking the test, or vice versa.
05:11 jeffreykegler So, in the POD I force every display to be marked -- the few displays which are not tested are also marked as "ignore"
05:12 jeffreykegler Then in the test suite I mark code which is intended to match a display -- all these marked stretches are named, and the names are used to match them up.
05:13 jeffreykegler There's a special utility which gathers all these marked sections in both the POD and the test suite, and makes sure that
05:13 jeffreykegler 1.) every POD display is marked, even if "ignore";
05:14 jeffreykegler 2.) all non-ignored displays have a match in the test suite;
05:14 jeffreykegler 3.) and the two compare equal.
05:15 jeffreykegler In the comparisons, various parameters allow whitespace to be ignored, the display to match only part of the test code, etc., etc.
05:16 jeffreykegler And that is how I make sure the code in my POD displays is (almost) never buggy.
05:17 jeffreykegler The big advantage of the above system is that it is mostly done via comments -- the only code is the utility that does the comparison ...
05:17 jeffreykegler and the comparison is never on any critical path, or in a position to break anything else ...
05:18 jeffreykegler fixing the displays can be left until all other issues in a new release are sorted out.
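A minimal Perl sketch of the kind of comparison utility described above. The marker syntax ("=for display name: ..." in the POD, "# display name: ..." in the test files), the file names, and the "ignore" convention are illustrative assumptions, not the actual Marpa author tooling:

    #!/usr/bin/env perl
    # Sketch only: gather named, marked stretches from the POD and the test
    # suite, then compare them.  Checking that every POD verbatim block is
    # marked at all (point 1 above) is omitted for brevity.
    use strict;
    use warnings;

    # Collect marked sections from a file, keyed by name.
    sub gather {
        my ( $file, $start_re, $end_re ) = @_;
        my ( %section, $name, @lines );
        open my $fh, '<', $file or die "cannot open $file: $!";
        while ( my $line = <$fh> ) {
            if ( !defined $name and $line =~ $start_re ) { $name = $1; @lines = (); next; }
            if ( defined $name and $line =~ $end_re ) {
                $section{$name} = join q{}, @lines;
                undef $name;
                next;
            }
            push @lines, $line if defined $name;
        }
        close $fh;
        return \%section;
    }

    # Normalize whitespace so trivial formatting differences do not fail the check.
    sub squash {
        my ($text) = @_;
        $text =~ s/\s+/ /g;
        $text =~ s/^\s+|\s+$//g;
        return $text;
    }

    my $pod_displays = gather( 'lib/My/Module.pod',
        qr/^=for \s+ display \s+ name: \s* (\S+)/x, qr/^=for \s+ display::end/x );
    my $test_code = gather( 't/displays.t',
        qr/^[#] \s* display \s+ name: \s* (\S+)/x, qr/^[#] \s* display \s+ end/x );

    my $errors = 0;
    DISPLAY: for my $name ( sort keys %{$pod_displays} ) {
        next DISPLAY if $name eq 'ignore';    # untested displays are marked "ignore"
        if ( !exists $test_code->{$name} ) {
            warn "POD display '$name' has no matching test code\n";
            $errors++;
            next DISPLAY;
        }
        if ( squash( $pod_displays->{$name} ) ne squash( $test_code->{$name} ) ) {
            warn "POD display '$name' does not match the test suite\n";
            $errors++;
        }
    }
    print $errors ? "FAIL: $errors mismatched display(s)\n" : "OK: all displays match\n";
    exit( $errors ? 1 : 0 );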
05:43 Aria Very nice.
05:43 hobbs joined #marpa
05:44 jeffreykegler Thanks!
05:45 jeffreykegler Good night.  AFK.
07:59 flaviu joined #marpa
10:11 lwa joined #marpa
10:22 pczarn joined #marpa
11:30 jdurand joined #marpa
11:38 jdurand Re http://irclog.perlgeek.de/marpa/2015-02-15#i_10118215 - maybe worth a look at Test::Inline. I do not know how the POD-marked sections for Test::Inline show in a normal POD output though.
12:13 jeffreykegler joined #marpa
12:14 jeffreykegler jdurand: re http://irclog.perlgeek.de/marpa/2015-02-15#i_10118886 -- comparable, but it seems to do something different.
12:15 jeffreykegler It grabs snippets from the code itself and puts them into the test suite -- I found, after trying that approach, that tying the main production code to the displays and/or to the tests is a bad idea.
12:16 jeffreykegler As I've done it, the production code is completely independent of the test suite code and the displays ...
12:17 jeffreykegler and the test suite code and the displays are tied together only by a comparison utility.
12:17 jeffreykegler I found mixing these things together does not scale.
12:19 jeffreykegler Btw, note that I also separate POD and code, and for the same reason -- tying them together does not scale well, and IMHO makes both the code and the POD harder to read and harder to maintain.
12:22 jeffreykegler joined #marpa
12:22 jeffreykegler re inline testing, just found this http://programmers.stackexchange.com/questions/188316/is-there-a-reason-that-tests-arent-written-inline-with-the-code-that-they-test
12:23 jeffreykegler The user-selected answer is a good explanation of why I gave up on inline testing.
15:41 sivoais joined #marpa
19:46 jdurand joined #marpa
19:47 jdurand Re http://irclog.perlgeek.de/marpa/2015-02-15#i_10118973 - thx, good reading
21:30 ronsavage joined #marpa
22:29 pczarn joined #marpa
22:50 ronsavage These days I always write a small program, ship it as scripts/synopsis.pl, and copy it into the POD.
23:05 jeffreykegler joined #marpa
23:05 jeffreykegler ronsavage: a very similar approach
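A small sketch in the same spirit as ronsavage's approach: a test that keeps scripts/synopsis.pl and the POD SYNOPSIS from drifting apart. The file paths and the regex that extracts the SYNOPSIS block are assumptions for illustration, and it assumes the script is copied into the POD verbatim:

    #!/usr/bin/env perl
    # Sketch only: compare scripts/synopsis.pl with the verbatim SYNOPSIS
    # section copied into the POD, ignoring whitespace differences.
    use strict;
    use warnings;
    use Test::More tests => 1;

    sub slurp {
        my ($file) = @_;
        open my $fh, '<', $file or die "cannot open $file: $!";
        local $/;
        return <$fh>;
    }

    my $script = slurp('scripts/synopsis.pl');
    my $pod    = slurp('lib/My/Module.pm');

    # Grab everything between "=head1 SYNOPSIS" and the next "=head1" (or EOF).
    my ($synopsis) = $pod =~ /^=head1 \s+ SYNOPSIS \s* \n (.*?) (?= ^=head1 | \z )/xms;
    $synopsis = q{} if !defined $synopsis;

    # Squash whitespace: the POD copy is indented to make it a verbatim block.
    for ( $script, $synopsis ) { s/\s+/ /g; s/^\s+|\s+$//g; }

    is( $synopsis, $script, 'POD SYNOPSIS matches scripts/synopsis.pl' );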
23:08 jeffreykegler Btw, it occurs to me that if, instead of saying Marpa is an Earley's algorithm with optimized Earley sets, I said ...
23:09 jeffreykegler that Marpa is a hyper-optimized packrat parser, the idea might get across to more folks.
23:11 jeffreykegler Also, today -- https://github.com/trizen/language-benchmarks
23:12 jeffreykegler tests of recursion, where Perl does not do well
23:13 jeffreykegler Those of you who've looked at my code may have noticed there are almost no recursions, and AFAIK no deep ones.
23:13 jeffreykegler I rewrite all recursions, even if it means using an explicit stack at points.
23:14 jeffreykegler I avoided recursions just as much in C as in Perl, even though C recurses as fast as the metal.
23:15 jeffreykegler Because (so I've read) some threading implementations restrict the call stack to a small, fixed size.
23:16 jeffreykegler And when I say Libmarpa runs in all threaded implementations, I want to mean exactly that, as far as can be known.
23:16 jeffreykegler Rewriting recursions *does* make for code that is harder to read, unfortunately.
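A short Perl sketch of the kind of rewrite Jeffrey describes: a depth-first walk driven by an explicit stack, so the pending work lives on the heap rather than on a possibly small, fixed-size thread call stack. The node layout here is an illustrative assumption, not Libmarpa's actual data structure:

    use strict;
    use warnings;

    # Pre-order walk of a tree whose nodes look like [ $payload, @children ],
    # using an explicit stack instead of recursion.
    sub walk {
        my ( $root, $visit ) = @_;
        my @stack = ($root);    # this array replaces the call stack
        while (@stack) {
            my $node = pop @stack;
            my ( $payload, @children ) = @{$node};
            $visit->($payload);
            push @stack, reverse @children;    # reverse keeps left-to-right order
        }
        return;
    }

    # Usage: visits S, NP, Det, N, VP, V, NP in that order.
    my $tree = [ 'S', [ 'NP', ['Det'], ['N'] ], [ 'VP', ['V'], ['NP'] ] ];
    walk( $tree, sub { print "visit: $_[0]\n" } );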
23:46 ronsavage joined #marpa
