End-to-End JavaScript Quality Analysis

Velocity 2013 Speaker Series

The rise of single-page web applications means that front-end developers need to pay attention not only to network transport optimization, but also to rendering and computation performance. JavaScript language tooling has not quite caught up with the richer, more varied performance metrics such a development workflow demands. Fortunately, emerging tools can serve as a stop-gap until the browser itself provides native support for those metrics. I’ll be covering a number of them in my talk at Velocity next month, but here’s a quick sneak preview of a few.

Code coverage

One important practice for understanding overall single-page application performance is instrumentation of the application code. The most obvious use case is analyzing code coverage, particularly when running unit tests and functional tests. Code that never gets executed during testing is an accident waiting to happen. While 100% coverage is not always practical, having no coverage data at all does not inspire much confidence. These days, easy-to-use coverage tools such as Istanbul and Blanket.js are becoming widespread, and they work seamlessly with popular test frameworks such as Jasmine, Mocha, Karma, and many others.
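As a sketch of the idea, consider a tiny module with branches that tests can easily miss; `clamp` and the `test/` directory are hypothetical, and the commented command assumes Istanbul and Mocha are installed via npm:

```javascript
// clamp.js -- a hypothetical module whose test coverage we want to measure.
function clamp(value, min, max) {
  if (value < min) return min; // only covered if a test passes a too-small value
  if (value > max) return max; // only covered if a test passes a too-large value
  return value;
}
module.exports = clamp;

// Running a Mocha suite under Istanbul, e.g.:
//
//   istanbul cover _mocha -- test/
//
// reports per-file statement, branch, and function coverage, so a suite that
// never exercises the boundary values shows up immediately in the report.
```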

Complexity

Instrumented code can be leveraged for another type of analysis: run-time scalability. Performance is often measured as elapsed time, e.g. how long it takes to perform a certain operation. This stopwatch approach tells only half of the story. For example, measuring that an address book application sorts 10 contacts in 10 ms says nothing about how it will scale. How will it cope with 100 contacts? 1,000 contacts? Since it is not always practical to carry out a formal complexity analysis of the application code, the workaround is to determine its empirical run-time complexity. In this example, that can be done by instrumenting a particular part of the sorting implementation (probably the “swap two entries” function) and watching its behavior with different input sizes.
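The idea can be sketched in a few lines. The insertion sort and swap counter below are hypothetical stand-ins for the address book’s real sorting implementation:

```javascript
// Sketch: count calls to the "swap two entries" step inside a simple
// insertion sort to observe empirical run-time complexity as input grows.
function instrumentedSort(arr) {
  var swaps = 0;
  var a = arr.slice();
  for (var i = 1; i < a.length; i++) {
    for (var j = i; j > 0 && a[j - 1] > a[j]; j--) {
      var tmp = a[j - 1]; a[j - 1] = a[j]; a[j] = tmp; // the swap being monitored
      swaps++;
    }
  }
  return { sorted: a, swaps: swaps };
}

// Watch how the swap count grows with input size; roughly quadratic growth
// here reveals the O(n^2) behavior that a single stopwatch reading hides.
[10, 100, 1000].forEach(function (n) {
  var input = [];
  for (var i = 0; i < n; i++) input.push(Math.random());
  console.log(n + ' items -> ' + instrumentedSort(input).swaps + ' swaps');
});
```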

As JavaScript applications grow more complex, some steps are necessary to keep the code as readable and as understandable as possible. With a tool like JSComplexity, code complexity metrics can be obtained in a static analysis step. Even better, you can track both McCabe’s cyclomatic complexity and the Halstead complexity measures of every function over time, which guards against accidental changes that quietly add complexity to the code. For an application dashboard or continuous integration panel, these complexity metrics can be visualized using Plato in a few easy steps.
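To see what the metric captures, here is a minimal, hypothetical sketch: each decision point (an `if`, `case`, `&&`, or `||`) adds one to McCabe’s cyclomatic complexity, so the table-driven version of the same lookup scores lower than the branchy one:

```javascript
// Cyclomatic complexity 4: three if statements plus the default path.
function statusLabelBranchy(code) {
  if (code === 200) return 'OK';
  if (code === 404) return 'Not Found';
  if (code === 500) return 'Server Error';
  return 'Unknown';
}

// Cyclomatic complexity 2: the lookup table leaves one || decision point.
var LABELS = { 200: 'OK', 404: 'Not Found', 500: 'Server Error' };
function statusLabelTable(code) {
  return LABELS[code] || 'Unknown';
}
```

Both functions behave identically; tracking the metric over time flags the moment someone turns the second shape back into the first.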

Defensive workflow

An optimal application development workflow does not treat application performance as an afterthought. Performance is a feature, and its metrics must therefore be an integral part of the workflow. This is where a multi-layer defense approach can help a lot. Even if the QA team performs a series of thorough and intensive tests, a simple smoke test (possibly via command-line headless browser automation such as PhantomJS) can reveal a mistake as early as possible. After all, what is the purpose of hammering the address book application if its sorting feature has suddenly become unbearably slow?
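A minimal sketch of such a smoke test, written here as a plain Node script (it could equally be driven from a PhantomJS page load); `sortContacts`, the input size, and the millisecond budget are all hypothetical placeholders:

```javascript
// Hypothetical smoke test: fail fast if sorting a large address book
// blows past a fixed time budget.
function sortContacts(contacts) {
  return contacts.slice().sort(function (a, b) {
    return a.name < b.name ? -1 : a.name > b.name ? 1 : 0;
  });
}

// Returns true when sorting n random contacts finishes within budgetMs.
function smokeTest(n, budgetMs) {
  var contacts = [];
  for (var i = 0; i < n; i++) {
    contacts.push({ name: 'contact-' + Math.random() });
  }
  var start = Date.now();
  sortContacts(contacts);
  return Date.now() - start <= budgetMs;
}

if (!smokeTest(10000, 1000)) {
  console.error('Sorting regression detected');
  process.exit(1); // a non-zero exit status fails the build
}
```

Wired into continuous integration, this one check catches a sorting regression minutes after it is introduced, long before the full QA pass.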

A defensive workflow can start from the developer’s primary tools: the code editor/IDE and the revision control system. Ideally, the editor should warn about possible problems in the code, from simple syntax errors to more serious issues such as a leaked global variable. Modern version control systems such as Git support the concept of a pre-commit hook: a specified script runs before the change is checked in, and if that script invokes tests, any failure blocks the check-in. This proactive measure prevents bad code from sneaking into the repository.

Complex applications require a different development strategy. Investing some effort in these emerging JavaScript language tools should be relatively easy to justify to your team: the positive impact of automated performance metrics becomes noticeable once the development team grows from a solo developer into a larger group.

This is one of a series of posts related to the upcoming Velocity conference in Santa Clara, CA (June 18-20). We’ll be highlighting speakers in a variety of ways, from video and email interviews to posts by the speakers themselves.

