It’s the end of the web as we knew it

You might feel fine.

For the past 15 years, Google has enforced the classic “HTML as foundation” architecture at the heart of the Web. Content creators and the developers who support them had to present content and link information as part of their pages’ HTML if they wanted Google’s spidering bots to see them. Google effectively punished developers who made links or content available only through JavaScript (or images, or CSS), relegating their pages to low rankings or leaving them out of search results entirely.
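To make that old rule concrete: a link written into the HTML was visible to Google’s bots, while a link created by script was not. A minimal sketch (the markup here is illustrative, not taken from any particular site):

```html
<!-- Crawler-friendly under the old rules: the link is in the HTML itself. -->
<a href="/products">Products</a>

<!-- Effectively invisible to an HTML-only bot: this link exists
     only after the script runs. -->
<div id="nav"></div>
<script>
  var link = document.createElement('a');
  link.href = '/products';
  link.textContent = 'Products';
  document.getElementById('nav').appendChild(link);
</script>
```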

Google did this to keep their processing simple, not because of a deep fondness for HTML. Even as Google’s bots stuck to a simple diet of HTML, other parts of Google were developing JavaScript-centric approaches, like AngularJS: a “Superheroic JavaScript MVW Framework” that “is what HTML would have been, had it been designed for building web-apps.”

Angular is far from alone. As JavaScript grew, more and more programmers wanted to build their apps as programs, not as pages. Or, as Jen Simmons summarized it at Fluent, “Dang that stupid HTML, I’m just going to load an empty page… then I’ll run the real program, I’ll run the JavaScript.”

As this trend grew, my Twitter feed filled with ads for BromBone and similar services. When Google’s crawler visits a site, these services run its JavaScript on their own servers to produce an HTML snapshot, which they then present to Google.
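BromBone doesn’t publish its internals, but the general shape of these prerendering services is easy to sketch: spot a crawler’s request and hand it a snapshot instead of the JavaScript app. A rough Express-style sketch, with serveSnapshot() standing in as a hypothetical helper:

```javascript
var express = require('express');
var app = express();

// Hypothetical stand-in: a real service would return HTML captured
// ahead of time by a headless browser that executed the page's JavaScript.
function serveSnapshot(url) {
  return '<html><body><!-- prerendered content for ' + url + ' --></body></html>';
}

var BOT_PATTERN = /googlebot|bingbot|baiduspider/i;

app.use(function (req, res, next) {
  // Google's old AJAX-crawling scheme marked "give me HTML" requests
  // with an _escaped_fragment_ query parameter; bots can also be
  // spotted (imperfectly) by User-Agent.
  var isBot = BOT_PATTERN.test(req.headers['user-agent'] || '') ||
              req.query._escaped_fragment_ !== undefined;
  if (isBot) {
    res.send(serveSnapshot(req.url)); // crawlers get the HTML snapshot
  } else {
    next(); // ordinary visitors get the JavaScript app as usual
  }
});

app.listen(3000);
```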

Last week, Google tore down the JavaScript wall, announcing that:

we decided to try to understand pages by executing JavaScript. It’s hard to do that at the scale of the current web, but we decided that it’s worth it. We have been gradually improving how we do this for some time. In the past few months, our indexing system has been rendering a substantial number of web pages more like an average user’s browser with JavaScript turned on.

In some ways, this is just another step in Google’s long path toward seeing the Web the way that browsers see the Web. Google’s extensive work with the Chrome browser and V8 JavaScript engine gives them a toolset that didn’t exist when they first began spidering the Web. In their battles against various kinds of black-hat SEO, Google’s had to look into document structures and supporting files. Given the complexity of some of that, the extra complexity of JavaScript processing may not be that difficult a step.

Most developers aren’t likely to change the way they build sites immediately, of course. Just because Google will now let developers build content and make links through JavaScript doesn’t mean that everyone will rush to rebuild their sites in Angular. Tearing down this wall, though, gives programmers far more freedom to build the Web in a much more program-like fashion.

That leaves me wondering in particular what is to become of hybrid approaches, like isomorphic JavaScript. Performance and maintainability will remain critical issues for web apps, but the SEO problem that article listed as the first “trouble in paradise” just vanished. “It’s invisible in Google” was a much simpler story than the details of performance and especially the cost of long-term maintenance.
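For readers who haven’t run into the term, “isomorphic” JavaScript just means the same rendering code runs on the server (for the first, crawler-visible response) and in the browser (for everything after). A deliberately tiny sketch of that shape; the file and function names are illustrative:

```javascript
// render.js -- shared by server and client; that sharing is the
// whole "isomorphic" idea.
function render(state) {
  return '<h1>' + state.title + '</h1>';
}
// Exported for Node; in a browser the same file just defines render().
if (typeof module !== 'undefined') {
  module.exports = render;
}

// server.js -- the first response is real HTML, so even an
// HTML-only crawler sees content.
var http = require('http');
var render = require('./render');

http.createServer(function (req, res) {
  res.writeHead(200, { 'Content-Type': 'text/html' });
  // A real setup would also serve render.js to the browser so that
  // later updates on the client reuse exactly this code.
  res.end('<div id="app">' + render({ title: 'Hello' }) + '</div>');
}).listen(8080);
```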

For the past few years, I’ve argued that layered models built on a markup foundation are a better architecture than piles of code. I don’t say “progressive enhancement” very often, but I can’t see the march away from that (and separation of concerns) as helping the Web. Google’s decision to throw open those doors will, I suspect, lead over time to a much more tangled Web that is far harder to maintain.
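A minimal sketch of what that layering looks like in practice: the markup below works with JavaScript disabled, and the script is a separable enhancement on top of it (the form and endpoint are hypothetical):

```html
<!-- Foundation layer: a form that works with no JavaScript at all. -->
<form id="search" action="/search" method="get">
  <input type="search" name="q">
  <button type="submit">Search</button>
</form>
<div id="results"></div>

<!-- Enhancement layer: script intercepts the submit and updates the
     page in place, but removing it leaves a working form behind. -->
<script>
  document.getElementById('search').addEventListener('submit', function (e) {
    e.preventDefault();
    var q = this.elements.q.value;
    fetch('/search?q=' + encodeURIComponent(q))
      .then(function (response) { return response.text(); })
      .then(function (html) {
        document.getElementById('results').innerHTML = html;
      });
  });
</script>
```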

At the same time, though, I don’t think Google had much choice. The future of the Web clearly includes JavaScript, especially as Web Components become more common. Google notes that “sometimes the JavaScript may be too complex or arcane for us to execute”, and I wonder how far down these rabbit holes Google will want to go.
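Web Components sharpen that problem, because a component’s visible content may not exist until its definition runs. A minimal sketch using the custom elements API (the element itself is hypothetical):

```javascript
// An HTML-only crawler sees <hello-card></hello-card> as an empty,
// unknown tag; the text below exists only after this script executes.
class HelloCard extends HTMLElement {
  connectedCallback() {
    this.textContent = 'Hello from inside a component';
  }
}
customElements.define('hello-card', HelloCard);
```

In the page’s markup, there would be nothing but an empty hello-card tag for a bot to read.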

I’m not certain what the future of the Web will look like, but I know it changed last week. Change may come slowly, but it will come.
