Documentation as Testing

Can explanation contribute to technology creation?

“If you’re explaining, you’re losing.”

That gem of political wisdom has always been hard for me to accept, since, after all, I make my living explaining technology. I don’t feel like I’m losing. And yet…

It rings true. It’s not that programs and devices shouldn’t need documentation, but rather that documentation is an opportunity to find out just how complex a tool is. The problem is less that documentation writers are losing when they’re explaining, and more that creators of software and devices are losing when they have to settle for “fix in documentation.”

I was delighted last week to hear from Doug Schepers of WebPlatform.org that they want to “tighten the feedback loop between specification and documentation to make the specifications better.” Documentation means that someone has read and attempted to explain the specification to a broader audience, and that broader audience can then try things out and add comments of its own. Writing documentation with that as an explicit goal is a much happier approach than the usual fate of documentation writers: trapped explaining unfixable tools whose creators apparently never gave much thought to explaining them.

It’s not just WebPlatform.org. I’ve praised the Elixir community for similar willingness to listen when people writing documentation (internal or external) report difficulties. When something is hard to explain, there’s usually some elegance missing. Developers writing their own documentation sometimes find it, but it can be easier to see the seams when you aren’t the one creating them.

Is there a way to formalize this? Writing documentation is not – despite Javadoc and similar systems – easily automated. Developers aren’t surprised when their tests take time to run, but they usually expect them to run in computer time, not human time. Even the tighter loop that WebPlatform.org is creating runs over weeks or months, though it’s effectively a test of specifications, not code.
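The closest small-scale mechanization I know of is Python’s doctest module, which runs the usage examples embedded in documentation as tests, so the prose and the behavior can’t silently drift apart. A minimal sketch, with an invented function:

```python
import doctest

def slugify(title):
    """Convert a post title to a URL slug.

    The examples below read as documentation, but doctest also runs
    them as tests -- if the explanation drifts away from the actual
    behavior, the test run fails.

    >>> slugify("Documentation as Testing")
    'documentation-as-testing'
    >>> slugify("  Already   spaced  ")
    'already-spaced'
    """
    return "-".join(title.lower().split())

if __name__ == "__main__":
    doctest.testmod(verbose=True)
```

It only tests code against its examples, not against a human’s broader experience of the tool, but it is documentation doing double duty as a test suite.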

Donald Knuth’s Literate Programming seemed like the ultimate way to tighten that loop:

Instead of imagining that our main task is to instruct a computer what to do, let us concentrate rather on explaining to human beings what we want a computer to do.

The practitioner of literate programming can be regarded as an essayist, whose main concern is with exposition and excellence of style.
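For flavor, here is a loose sketch of that spirit in plain Python comments rather than Knuth’s WEB, with the explanation leading and the code following it:

```python
# We want a running mean over a stream of samples. Rather than keep
# the whole stream around, we keep just two pieces of state -- the
# count and the sum -- and derive the mean on demand.

class RunningMean:
    def __init__(self):
        self.count = 0    # samples seen so far
        self.total = 0.0  # their running sum

    # Each sample updates both pieces of state; the mean is always
    # total / count, so no history needs to be retained.
    def add(self, sample):
        self.count += 1
        self.total += sample
        return self.total / self.count
```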

Much as it pains me to doubt Knuth, two problems remain. The first is the level of explanation: this is still explaining what the program should do, not how humans should interact with the program. The second is the question of who does the documentation: the testing aspect tends to produce fewer false moments of happiness when someone other than the creator of a project is the one doing the documentation. Knuth notes that “WEB seems to be specifically for the peculiar breed of people who are called computer scientists,” and the boundaries between experts and more casual users are often what documentation testing can show most vividly.

I love, though, that Knuth considers it a moral question: “I am imposing a moral commitment on everyone who hears the term; surely nobody wants to admit writing an illiterate program.”

Can we get there? Can we make “documented technology” a more powerful phrase than “literate program”? I don’t have an organization available for my own experiments, but I’d love to hear if you have experience with this. Did it improve code or products? Or did it just make developers chafe?

Comments
  • Eric Elliott

    Very timely. I’m experimenting a lot with self-documenting APIs. See https://github.com/dilvie/jiron, which will be produced automatically by https://github.com/dilvie/siren-resource.

    Both documentation and test generation are on the roadmap for siren-resource, and from the API consumer’s perspective, a Jiron endpoint is not only self-documenting, it’s a fully functioning program in its own right (like a stripped-down management console with basic HTML/CSS styling).
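    For a sense of what that looks like, here’s a hand-written, simplified Siren-style entity as a Python dict (illustrative only, not actual siren-resource output; the field names are invented):

    ```python
    # A simplified, hand-written Siren-style entity (not real
    # siren-resource output). The response describes its own data
    # *and* the operations a client can take next.
    order = {
        "class": ["order"],
        "properties": {"orderNumber": 42, "status": "pending"},
        "actions": [
            {
                "name": "add-item",            # what a client may do...
                "method": "POST",
                "href": "/orders/42/items",
                "fields": [                    # ...and what it must send
                    {"name": "productCode", "type": "text"},
                    {"name": "quantity", "type": "number"},
                ],
            }
        ],
        "links": [{"rel": ["self"], "href": "/orders/42"}],
    }
    ```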

    I have been interested in this stuff for a long time. Early in my career, I worked with RAD systems capable of generating complete API documentation and working code across any class-based object language. Those early systems used primitive technology compared to what we have today, but were much better at code and documentation generation than any system I’ve seen in the last several years. Literally, the diagrams were the code.

    I have high hopes that FBP semantics will mature enough to make even more capable systems available today, built with current state-of-the-art technologies. If an FBP diagram isn’t good documentation of how a system works, I don’t know what is. It should also be pretty trivial to create a text manual using an FBP diagram as source material. http://noflojs.org/
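    To give a flavor of how the wiring itself can serve as documentation, here’s a rough approximation of FBP-style connections in plain Python generators (the components are invented, and NoFlo’s actual runtime is much richer than this):

    ```python
    # Rough FBP-flavored sketch: each function is a "component" and
    # the final wiring line is both the program and its description.

    def read_lines(path):              # source component
        with open(path) as f:
            for line in f:
                yield line.rstrip("\n")

    def count_words(lines):            # transform component
        for line in lines:
            yield len(line.split())

    def total(counts):                 # sink component
        return sum(counts)

    # The "graph": ReadLines OUT -> IN CountWords OUT -> IN Total
    print(total(count_words(read_lines("input.txt"))))
    ```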

    The more complex systems get, and the more complex our interactions with those systems get, the more we’ll need to rely on automation. Think code generation and documentation; DARPA recently even launched a multi-million-dollar contest to design systems that can detect and patch software vulnerabilities automatically. http://www.darpa.mil/NewsEvents/Releases/2013/10/22.aspx

    So, in the future, the feedback loop won’t just be short. The specification will be the source of both program code and human documentation. All we need are specification semantics complete enough to produce working implementations with no out-of-band information. For complex systems, maybe that would look like FBP. For APIs, maybe it would look like RAML.

    I for one can’t wait to see how this stuff will develop over the course of the next five years or so…

    • Simon St.Laurent

      I can’t share your enthusiasm, though it’s possible that automated systems will help with some of it. However workable the automation may be at creating documentation, it seems unlikely to provide the kind of testing that a separate human process can. Over time, that may not matter so much – code generating code generating code may be less enthusiastic about our contribution – but a key part of the value here comes from stepping away from the machine.

      FBP is intriguing in part because of the way it merges documentation with code, but only for certain levels of a project. It still retains black boxes inside its flow, and I don’t think describing the flow does enough to explain how to interact with a program. It’s a start, but I’m not sure that approach can expand as far as you’d like.

      • Eric Elliott

        There are certainly limits and I didn’t mean to imply that the specification could ever generate all the necessary documentation and tests.

        I’m a big fan of code reviews because they get fresh eyes on a project.

        Somebody tasked with explaining how a system works is likely to discover holes in it and provide valuable feedback to the creator.

  • Jim Fuller

    Have you ever seen org-babel (http://orgmode.org/worg/org-contrib/babel/), an extension of org-mode? It’s the closest I’ve seen to literate programming that almost works, though with the obvious impracticality that org-mode is a dependency (and Emacs!). Comments and docs in code tend to be second-class citizens.

    Literate programming is a laudable goal (and I try to write code that explains what it’s doing), but we live in a world where speed takes precedence over everything … and code that has been heavily optimized often becomes obfuscated over time.

    I’ve always believed that externalizing the ‘meaning’ of a code block’s parse-tree representation is fully enabled by using a declarative format; even better if you can migrate as much of the code as possible into mutable (in the MVCC sense) ‘data’.

    Less code and more data … letting the code describe what it wants to do allows the underlying mechanisms to do the obfuscating for speed, parallelism, etc.
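    A tiny invented illustration: the table below declares intent as data, and the small engine beneath it is free to change how it executes (caching, parallelism) without touching the declaration:

    ```python
    # "Less code, more data": validation rules live in a data table,
    # not in imperative branching. Entirely invented example.
    VALIDATIONS = [
        ("name", lambda v: isinstance(v, str) and v.strip() != ""),
        ("age",  lambda v: isinstance(v, int) and 0 <= v < 150),
    ]

    def invalid_fields(record):
        """Return the names of fields that fail their declared rule."""
        return [field for field, ok in VALIDATIONS
                if not ok(record.get(field))]

    print(invalid_fields({"name": "Ada", "age": 36}))  # []
    print(invalid_fields({"name": "", "age": 200}))    # ['name', 'age']
    ```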