Stop standardizing HTML

It's time for developers to create their own vocabularies

When HTML first appeared, it offered a coherent if limited vocabulary for sharing content on the newly created World Wide Web. Today, after HTML has handed off most of its actual work to other specifications, it’s time to stop worrying about this central core and let developers choose their own markup vocabularies and processing.

When the W3C first formed, it formed around HTML, the core standard for content on the Web, defining that content's structure, appearance, and behavior. Over the next few years, however, it became clear that HTML was doing too much, and the W3C and other groups refactored appearance, behavior, and many semantics into separate specifications:

  • Cascading Style Sheets (CSS) took responsibility for presentation and layout.

  • JavaScript took responsibility for behavior, aided by the Document Object Model (DOM) and a variety of APIs for handling device and multimedia interactions.

  • WAI-ARIA took responsibility for accessibility semantics, ensuring that content remained available to a broad audience even if developers pushed the current boundaries of markup.

It's not a completely neat separation – some of CSS feels like behavior, and JavaScript can manipulate presentation, for example – but it certainly took a lot of pieces out of HTML. A few aspects of HTML, notably media inclusion, are still mostly handled at the markup level, but most of them aren't any longer. Forms and linking are both still defined in HTML itself, but aren't difficult to implement separately.


When the WHATWG formed, its focus on web application development led it to build a variety of JavaScript tools, which still dominate the HTML5 work. In many ways, 'HTML5' has been about adding a few pieces of markup functionality to HTML itself and a tremendous pile of JavaScript APIs to browsers in general.

Browsers still need a lot of work to achieve the visions developers have for a unified distributed application platform, but HTML itself carries less and less of that burden. The former flagship spec may be what comes to mind first when people say “web development,” but the reality is that it has been hollowed out. The heavy lifting happens in the other specifications.

This means that it's time to free markup semantics from the demands originally placed on HTML. That doesn't make markup total anarchy: defined markup syntax, both XML's stricter version and HTML5's more forgiving version, is still useful for creating a parse tree that browsers can decorate with CSS and WAI-ARIA, and then manipulate with JavaScript.

HTML itself is still useful – many people and tools know how to read and write it – but there is less and less reason to let the HTML vocabulary be a cage limiting our possibilities. Polyfills emerged as tools for bringing older browsers up to date with the latest specifications. Best practices for using them are still emerging, but the approach has become common in frameworks like AngularJS, making it easy for developers to create markup templates that JavaScript will later manipulate.
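
To make that concrete, here's a minimal sketch of the pattern, with the element name and attribute invented for illustration and a reasonably current browser assumed: CSS styles a tag the browser has never heard of, and a small script finds each instance and attaches behavior, polyfill-style.

    <style>
      x-note            { display: block; padding: .5em; border: 1px solid #ccc; }
      x-note[collapsed] { opacity: 0.5; }
    </style>

    <x-note>Click to toggle emphasis.</x-note>

    <script>
      // Polyfill-style upgrade: find every custom element and wire up behavior.
      document.querySelectorAll('x-note').forEach(function (el) {
        el.addEventListener('click', function () {
          el.toggleAttribute('collapsed');
        });
      });
    </script>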

(Update: I should have mentioned the Web Components spec, which will simplify this work.)

A stronger case for polyfills, though it was actually meant to argue for adding an element to the standard, comes from last year’s picture element saga. The Responsive Images Community Group developed a specification and implemented it (twice) with polyfills. Those developers weren’t happy when the W3C and WHATWG didn’t seem interested – conversations continue – but from my perspective they’d already done great work in creating the polyfills.

Changing the focus away from markup standards is difficult. The Web Standards Project spent a decade or so (with some of my help) evangelizing the need for web standards, and disbanded recently in the face of success. The message that web tools need to be designed by a central organization is, for better or worse, deeply ingrained, but it’s time for markup to fall out of that story.

I'm not quite ready to join Matthew Butterick in proposing "that the W3C be disbanded." The W3C (and WHATWG) are still doing good work developing the supporting infrastructure of CSS, WAI-ARIA, and JavaScript APIs. Ecma continues its work on JavaScript through ECMA-262.

It is well past time, though, for the W3C and the browser vendors to stop talking as if they constrain the markup developers can use, and to focus instead on the many things they can do to make the browsers that process that markup more capable. HTML's legacy vocabulary is a great foundation on which developers can build their own toolsets. The Web will benefit, however, from letting developers solve their information problems in their own ways, rather than trying to stuff too many things into a single vocabulary.

It’s hard to stop pursuing your greatest successes, even when they’ve been overshadowed by your other work. It’s time, though.

And if it’s too hard to stop standardizing, perhaps shifting the model, so that standards only codify work already done in polyfills, is an option. Even if centralized markup standards have benefits, let’s evolve them rather than design them.

  • http://twitter.com/benjudy Ben Judy

    Mind. Blown. Kinda like the first time someone suggests you can order off-menu in a nice restaurant. I don’t always like what’s on the menu, but to just make up my own order… Seems scary. Shouldn’t this be illegal? I’m not sure what to ask for. Is it OK to just ask for a plain old cheeseburger? Would I look like a fool? And yet… What if…. Should we?

  • Ilya Radchenko

    It’s hard to visualize what you mean without some examples of constrained markup and developer freedom.

    • Simon St.Laurent

      Sure. To take a trivial example, why exactly are we expecting the W3C/WHATWG to tell us what a main element should look like, or expecting there to be one true way that works (well enough) in all cases?

      Why aren't we creating our own structural markup? There's no remaining reason to stick with the old forms of <div class="xyz"> or <div class="zyx"> instead of just <xyz> and <zyx>. You can style, manage, and annotate XYZ and ZYX just as easily.

      The constraints haven’t been in the browser for a while. They’ve been in our own minds. The first steps into ‘developer freedom’ are likely to be trivial – as trivial as main – but that’s okay. Start structuring your own documents, figure out what structures work best for them, and then come back to the conversation about what additional scripting, styling, or annotation support you might need.
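
      For instance, here's roughly what that first trivial step can look like – the element names below are invented, and nothing needs to be registered anywhere for the browser to parse and style them:

        <article-doc>
          <doc-title>Quarterly report</doc-title>
          <doc-summary>Unknown elements still parse, style, and script like any others.</doc-summary>
        </article-doc>

        <style>
          article-doc { display: block; max-width: 40em; margin: 0 auto; }
          doc-title   { display: block; font-size: 2em; font-weight: bold; }
          doc-summary { display: block; font-style: italic; }
        </style>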

      Picture is a more daring leap because it involves multimedia and that draws more heavily on browser resources, but it was also a great sign of people saying that something is needed, and banding together to build it themselves. We need more of that.

      • http://twitter.com/RichardWentk Richard Wentk

        Markup made sense in 1994, but in 2013 it’s a dated and clumsy concept. HTML/CSS/JS is the worst of all possible worlds – too clumsy to match the page layout and text effects that are taken for granted in old-fashioned (?) print without plenty of unnecessary work, too non-standard to create repeatable experiences in all possible browsers. Polyfills are a band-aid – a very useful band-aid, but still a band-aid.

        Designers want simplicity, consistency, power, and reliability. Markup is a distraction, because customising the syntax of tags or class definitions won’t solve the underlying usability issues.

        If anything, it makes more sense to run one standard virtual machine model with standard hooks for hardware and networking (including sound and graphics) inside all browsers and move to high-efficiency just-in-time compilation using a new best-of-breed dev language.

        This probably won’t happen for a while, because web land still seems to want to believe content is content, and code is code.

        Of course it isn’t, and never really was. But Web 3.0 won’t happen until code knows how to do more with content than render it or db it, and developers start creating apps that assume content and code can be anywhere but *will* run in every browser (subject to security), APIs are (mostly) open and free, and collaborative API building is built into the dev process, instead of being bolted on via projects like github.

        • Simon St.Laurent

          If you’re not fond of markup in the first place, then yes, this is unlikely to be appealing. You may agree with the headline – “Stop standardizing HTML” – for completely different reasons.

          The growing power and consistency of virtual machines certainly opens new possibilities, and it may well be time for a collapse back to simpler approaches. I have a hard time imagining a world of shared virtual machines on the client, but I’ve been wrong before.

          In the meantime, however, this seems like a path by which the markup universe can evolve, helping untangle many of our currently overgrown pieces.

        • Damian Yerrick

          I imagine that processing the virtual machine is far more CPU-intensive than processing markup. If you switch HTML from a markup model to a virtual machine model, how will automated user agents such as web search engines extract information from the page? Is that what the WAI-ARIA part is about?

          • http://twitter.com/RichardWentk Richard Wentk

            Firstly, accessibility isn’t served by current technologies. WAI-ARIA is bolting accessibility onto something that wasn’t designed for it, and adding yet another level of authoring complexity which – while we’d like to hope otherwise – I’d guess is simply going to be ignored by many developers who are already struggling to support all current platforms and devices.

            (Anecdote – I recently tried to buy tickets for a show at one of the UK’s biggest venues. The code *did not work* in two out of three popular browsers. If the devs for a major site can’t get it to work for two thirds of their users, how likely are they to get accessibility working?)

            With a clean code-based browser, accessibility could potentially fall naturally out of a browser rendering mode. It would *just work*, with little or no extra effort.

            As for efficiency – it’s far more efficient to extract data directly via an API than it is to download a page with content, remove all the markup that surrounds it, and then do something useful with whatever is left.

            This applies to the DOM itself. Consider that (e.g.) jQuery has to spider its way through a node tree every time it tries to find an element, and you get a lot of wasted cycles during rendering that could be avoided with a cleaner design that made it possible for devs to define their own maximally efficient data structures.

            Besides, by the time you’ve used PHP/Javascript/Flash/Ruby/SQL/etc you’re already doing a *lot* of processing to make your page appear.

            Point is, content – and markup – aren’t static and haven’t been for a long while now. Most sites generate content on the fly using some combination of these and other technologies.

            But the technologies evolved in a random way, and they use different coding schemes, different interfaces, different design patterns, and different fundamental philosophies.

            Perhaps this isn’t a clean or efficient way to do this.

            Is it unrealistic to wonder if it's possible to simplify the process, make it more consistent and reliable, while also opening it up for completely new kinds of server and smart browser applications?

          • Luiz Felipe

            I don't think the virtual machine is more CPU intensive; it's the other way around. It is less intensive, because we have virtualization instructions on the CPU and don't have "markup" instructions on it, and I always thank Intel for that.
            And if web servers just serve data, then we can construct a true web.
            It's the web search engines' problem that the data will come in random formats; after all, it's data. Why must I "prostitute" my data as HTML only for Google to understand it?
            Why can't my data just be text, or XML, or JSON?

  • Peter Rushforth

    I agree with this in general, but browsers as we know them have a long way to go before they will be the markup agnostic programming platform you are suggesting they could be. They don’t use web architecture in a sensible way as it stands.

    For instance, when my browser sees this: <img src="http://example.com/resource-that-has-two-representations-one-json-and-one-jpeg">, it does this:

    GET http://example.com/resource-that-has-two-representations-one-json-and-one-jpeg
    Accept: */*

    Different preferences from different browsers, but none are perfect from what I’ve seen, and certainly no two are the same.

    How does my browser know that it will get an image back if it requests “anything you’ve got”? Same for scripts, etc etc etc. That server (it doesn’t belong to me) is set up to guess that the default representation is json.

    If any non-standard markup is allowed, providing better application/request behavior would be job 1. Only then will browsers become a viable programming platform for the Web. Maybe what’s needed is a whole new species of browser.

    • Simon St.Laurent

      These are the right kinds of questions.

      Working with external resources is definitely the trickiest part of the question. Structuring content that’s included within the document is not at all difficult, but external resources receive a lot more attention from the browser and the underlying protocols. Those protocols seem to be getting even more complicated in the age of SPDY/HTTP 2.0, as developers strive to (over?) optimize handling. Developing a more flexible image format is another option under consideration. In the long run I won’t be entirely surprised if manifest files come into it.

      HTTP’s content negotiation techniques, unfortunately, are not exactly honored. They could certainly help in these cases – or to put it more precisely, this approach might let us finally include the extra information with our links that would make them usable!
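
      As a rough sketch (headers abbreviated, and assuming a server configured to honor negotiation), that extra information could be as simple as a more specific Accept header, with the server declaring which representation it chose:

        GET /resource-that-has-two-representations-one-json-and-one-jpeg HTTP/1.1
        Host: example.com
        Accept: image/jpeg, image/*;q=0.8, */*;q=0.1

        HTTP/1.1 200 OK
        Content-Type: image/jpeg
        Vary: Accept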

      “A whole new species of browser” is appealing, but I don’t think it’s either realistic or necessary. The current browsers aren’t perfect, but they’re close enough to handle most of this work, and could (I think) evolve with the rest of it.

  • http://twitter.com/dylanw Dylan Wilbanks

    I’m not sure I’m ready to sign on.

    Polyfills (and their web frameworks) aren’t a be-all end-all solution, especially if you’re running a site that taxes the VM and rendering engines with their code already. I can tell you in my job at an enterprise SaaS company that we slam hard into the limits of the JS VM already. We had long and hard debates about using jQuery because of how it could push the site to unacceptable perf limits, and I had to outright ban LESS for similar reasons. Polyfills aren’t just about JS, of course, but I look at JS with a jaundiced eye now that I have seen what happens on the outer boundaries of performance.

    We did try opening the vocabulary up before — remember XML/XSLT/XHTML? Its failure ultimately came down to a failure to understand the power of an open vocabulary with a strict grammar. But if we do as you suggest and walk away from standards, who governs the grammar? How will we deal with competing codebases, contradictory rules, and a need for backwards compatibility (especially in the enterprise space)? I started as a web designer/developer in the days of IE and Netscape where sub-point versions often shared little with their previous iteration. And we’re getting back there now with the multiple form factors and browsers we have to support. I don’t want to go back to that world.

    And my last worry — which is probably more about Butterick’s desire for a FOSS driven web stack than anything you offered — is who will watch over the 20% side of the 80-20 equation? Accessibility is in that 20%, for example, and we saw how Hixie’s quasi-80/20 attitude towards accessibility required intervention from the W3C (to his anger). Not every user has a voice in an 80/20 FOSS world, much less the ability to extend the codebase for their own needs. But if the fundamental thrust of the Web has been that it’s open, extensible, and available to everyone, then we have to have some sort of governance to ensure that that remains so.

    So, I’m not ready to let go of the ossified, vendor-centered W3C just yet. Nothing better has replaced them. I can see beyond W3C, to a place where a board governs the central trunk line of the web stack and groups commit into it — with clear rules and patterns for designing and writing these standards — but that’s neither what W3C nor WHATWG are capable of. They serve their purposes for right now, a pragmatic hegemony. To move beyond would require something beyond what we can — and want — to do right now with the web.

    • Simon St.Laurent

      I think you’ve read more of Butterick into this piece than is actually there.

      The grammar still comes from (standard) XML and HTML5 parsing rules. The accessibility handling still comes from (standard) WAI-ARIA. The programming still comes through (standard) JavaScript and the DOM. (Standard) CSS still works just fine. All of those can stay with “the ossified, vendor-centered W3C” (and WHATWG and ECMA), and help control the codebase issues.

      The only piece I’m proposing freeing (here) is the actual vocabulary. The 80/20 on that is less about accessibility and more about handling complex binary (multimedia) content. I’m not even proposing throwing HTML away, just saying it’s time to stop letting it limit our vocabulary choices.

      On JavaScript performance, I agree – it can suck. I would suggest, though, that a lot of that suck comes from the way we typically apply frameworks. Most developers – myself included – incorporate whole libraries at a time. Even small libraries can take up lots of time when they get called all of the time, typically because they’re watching too much.

      The polyfill approach (unless of course it’s part of a larger framework…) reduces those impacts. There is setup time at load, which CSS selectors have eased. There are (probably) events to watch, but those won’t go away. Complex polyfills may do complex processing, but that tends to come with the work.

      I agree that there are a lot of JavaScript performance issues. If this approach is just piled on top of the existing stack of libraries, there will of course be problems. My hope, though, is that this approach will let us restructure our code to fit our documents and interfaces better, and help us out of the existing tangle.

      In some sense, XML tried to open up the vocabulary, but it came with cultural baggage – schemas, DTDs, etc. – that made that opening a very small one. Most of what web developers actually saw of XHTML was the enforced syntax discipline and the strange new world of XSLT, not things that felt like "freedom". There's a larger parallel conversation to be had on the XML side of technology, but it's time to learn from and move beyond that failure.

  • zwetan kjukov

    I got one word for you: SGML

    • Wolowizard

      SGML is not one word, it's four.

      p.s. Everyone needs to stop bitching about standards. Do you think that we can get thousands of developers to agree on one thing? With competing ideas/businesses?

      If no one broke standards or morphed people's interpretation of what a standard may be, you would have no creativity or innovative ideas. It's all about breaking standards and making new ones.

  • FF222

    Smells like Google shill.

  • http://www.facebook.com/shaunpatterson Shaun Patterson

    Umm no. Standardize more. This is by far the most idiotic thing I’ve read in a while. Do you work for Microsoft?

    • giaskaylee

      You know, if this post had come around a few months ago, I might well have labeled the author a crazy pagan in need of an urgent dose of enlightenment or redemption, to be converted back to modern civilization immediately so nobody gets hurt. But now that I've been introduced to the really mind-blowing AngularJS, which demonstrated very successfully how writing your view logic DECLARATIVELY with custom HTML tags and attributes can make your application code way cleaner and more concise, with your SLoC potentially cut by 50%, what the author is advocating here doesn't seem so ridiculous at all.

      And, since people are already using normalize.css or reset.css nowadays to strip off all the default styles set by browsers, there is really no doubt that what's left of HTML these days is but a mere shell, just like Simon has described.

      All in all, I think it is still beneficial to keep a few elements that are tightly coupled with the structural aspect of HTML in a standardized manner, e.g. <nav>, <section>, <h1> – overall, the elements involved in creating the HTML5 document outline. But the standards shouldn't restrict or outlaw the creation of new elements and custom attributes.

    • http://www.facebook.com/tec.goblin Tec Goblin

      No, he probably works for Google :P (see angular.js references etc) – which more and more creates browser-specific “standards”.

  • http://beshr.com Beshr Kayali

    I think AngularJS is built with that in mind! Very interesting…

  • Eric Bidelman

    Your concerns are almost entirely addressed by Web Components [1] and Custom Elements.

    The primitives [2] being added to the web platform allow authors to define their own DOM structure (hidden from regular view) and encapsulate that into a custom HTML element. If necessary, one can even extend pre-existing HTML elements. If <button> doesn't suit your needs, create <my-megabutton>.
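
    Registering such an element looks roughly like the sketch below – the names are made up, and the current Custom Elements syntax is assumed rather than the earlier proposals circulating when this was written:

      // Rough sketch using the current Custom Elements API.
      class MyMegaButton extends HTMLButtonElement {
        connectedCallback() {
          this.addEventListener('click', () => console.log('mega!'));
        }
      }
      customElements.define('my-megabutton', MyMegaButton, { extends: 'button' });
      // Used in markup as: <button is="my-megabutton">Go</button>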

    HTML/DOM is for presentation, CSS is for designing the look and feel, and JS provides the functionality. Fact is, the demands of web applications today are different than those of the past. JavaScript is a necessary companion for building an app and I'm not sure we can get away from this triumvirate.

    That said, the reason frameworks like Ember, Angular, Knockout,… exist today is because the underlying platform is not capable enough. However, that is changing quickly as browsers start to implement these new technologies. Batman can do more with a bigger utility belt!

    [1]: http://html5-demos.appspot.com/static/webcomponents/index.html
    [2]: http://www.w3.org/standards/techs/components#w3c_all

    • Simon St.Laurent

      I agree, and I look forward to having the bigger toolbelt. Web Components will clean things up substantially. I just look forward to leaving the “HTML must be a vocabulary defined by committee” baggage behind.

  • Mike C

    Brilliant! I can’t wait to see pages written like this!

    *br { meaning: }
    *p { meaning: }

    These are

    two paragraphs!

    And this comes after a line break!

    • Mike C

      Ugh, of course the comment system totally mangled my code example. Oh well. Maybe in the new era this kind of bug will be fixed. :)

  • APC

    What’s the problem you’re trying to solve? I read through the article a couple of times and the best I can come up with is that you don’t like how HTML is defined but you don’t explain why.

    The example you give below, basically removing the word div, seems to add more work.

    The beauty of HTML is that it is human readable; it has a well defined, agreed upon structure that allows for anyone to read it and understand it. Is your idea to throw that out because you don’t like how it’s defined?

    • Simon St.Laurent

      I think it’s time to open the door to a lot more experiments. There just isn’t much reason any longer to say “the tags you use in HTML5 must be agreed upon by this committee.” The HTML process halted, then crawled, then lurched, and most of the new useful parts might bear the name of HTML but are really JavaScript APIs. We’ve hollowed out HTML so severely that it’s time to treat it like the shell it’s been for some time.

      There’s a strangely pervasive fear of anarchy, that suddenly documents will become impossible to share. I tend to suspect that it will go the other way – picture, for example, is in many ways an improvement on img, even before you get into questions of responsive design. Localization to different languages might scare people, but I’m not especially worried that markup in multiple languages is that much worse than content in multiple languages.

      It’s hard to tell people who are completely used to living in a comfortably furnished house that it’s time to go outside – it probably doesn’t “solve a problem”. However, there is a lot more out there to see and do.

      • APC

        Then go out and build a browser that will parse the experimental code you want to try. If it’s any good the ideas will move forward.

        • Simon St.Laurent

          I don’t have to build a browser to do it. Existing browsers already do it. The only genuinely difficult cases are around multimedia. Web Components will be here shortly to make it feel prettier.

          Walking out of the comfortably furnished house at this point basically means opening the door.

      • has

        Actually the *simplest, most valuable* improvement HTML5 could’ve made would’ve been to use XHTML1.0, not HTML4, as its base. While I’m no special fan of XML or SGML markups in general, that one modest rationalization of the whole HTML standard would’ve wiped out much of the wibbly-wobbly knotty inconsistent SGML-ness that makes HTML such an unnecessarily complex PITA to author, proof-check and parse, and would make it easy for any number of non-HTML extensions to be easily inserted into any modern or future HTML document simply by employing the appropriate namespacing.

        Unfortunately, the folk behind XHTML were so buoyed by their excitement at becoming a real proper W3C standard that they immediately disappeared up their own asses trying to make XHTML2.0 the Finest Markups System Evar Designed!!!!1!1!!! In the process, they completely forgot the original and infinitely more modest (and attainable) objective of squaring away some of the redundant mess caused by well-intentioned-but-not-so-wise-in-hindsight aspects of the original ‘flexible & forgiving’ HTML standards, which make it so horribly bad for scaling up and extending today.

        And then, of course, XHTML2.0 was such a bizarre alien boondoggle, *everybody* got utterly spooked at the thought of XHTML determining the future of *anything*. Which effectively and instantly relegated it to eternal self-congratulatory academic exercise for XML fabulists only, forever locked safely away in its own private ivory tower where it can never harm anyone else.

        Oh well, better luck when HTML6 rolls around…..

  • JohnnyX67

    Warning: blasphemy ahead

    Ideally, Javascript should be used rarely, if at all. The direction should be XML, XSLT, CSS3, XForms.

    That execution of random code is required is appalling. It is an attack vector. It is an imposition on users. Javascript should be limited to extraordinary use cases. Heresy, I know. Layout, routine behaviors, etc. should all be declarative. Javascript is a crutch necessitated by history and a misguided browser design POV. Much of the technology is already in browsers; it needs to be surfaced properly.

    • J David Eisenberg

      I like this idea, with one caveat. One of the big factors in the growth of the Web was the fact that HTML was easy to write; everyone could create a web page. To some extent, JavaScript also fits that description — easy to learn the basics; everyone can write a script that makes their pages dynamic. XSLT has a fairly steep learning curve and just *looks* forbidding. Which other declarative language might provide an easy entry path?

      • JohnnyX67

        It can be tough to get a handle on XSLT. No question.

        I found this tutorial helpful http://www.ibm.com/developerworks/library/x-hands-on-xsl/

        I thought the physical exercise, cutting & folding the paper etc, “beneath me” – that I could do it in my head. Well, don’t waste time with such thoughts! Do the exercise as described in the tutorial, you’ll have the “a-ha” moment.

        That being said, various books are by my side at all times when working with XSLT.

        As far as javascript and dynamic pages, how much of that dynamic behavior is typical UI behavior? Could that routine behavior be described declaratively (in the ideal world)? I think that should be the case.

        Aside from UI widgets what else is dynamic? Data retrieved from the server, a table lookup of codes, part#, etc. Tasks accomplished via AJAX. Perhaps there’s room for javascript to live on in this role. Thoughts?

      • William Cleveland

        I think the biggest thing XSLT needs is just an alternate syntax, that doesn’t insist on forcing the code into XML. It would be nice if we could send that all the way to the browser and have it supported, but a CoffeeScript-style approach wouldn’t be too bad.

        That does assume that the data source is still XML, though.

  • megaboz

    I’m relatively new to web app development (having mostly done Windows based applications) and this may be a crazy idea and may only apply to a certain subset of web apps, but why bother with HTML at all? Abstract it all away by developing a Javascript engine that generates all of the HTML for you in an application-in-a-page style, based on an object oriented model that exists on the server (mirrored on the client by the Javascript engine), styled by CSS, and passing data back and forth using JSON.

    Application developers develop in an OO environment (where for data-intensive applications, your business rules could exist and you connect your UI to the database backend) but don’t need to necessarily know HTML, Javascript, or CSS. They can of course, use HTML/Javascript to extend the framework, but if the framework suffices for the types of applications they need to develop, they never have to touch the HTML/Javascript, and only need a designer to style/theme the application using CSS.
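
    A hedged sketch of that idea, with every name below invented: the object model renders itself to the DOM, and the data arrives from the server as JSON.

      // Hypothetical object model mirrored from the server, rendered to the DOM.
      class TaskList {
        constructor(tasks) { this.tasks = tasks; }
        render(parent) {
          const ul = document.createElement('ul');
          this.tasks.forEach(task => {
            const li = document.createElement('li');
            li.textContent = task.title;
            ul.appendChild(li);
          });
          parent.appendChild(ul);
        }
      }

      fetch('/api/tasks')                        // hypothetical endpoint returning JSON
        .then(response => response.json())
        .then(tasks => new TaskList(tasks).render(document.body));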

    • Simon St.Laurent

      You absolutely can do that, and a lot of people do. I avoid it personally – it’s really hard to know what’s happening in the app unless you know its internals very well. A lot of frameworks are pushing that direction, and I’m expecting it’ll continue to grow as people who think more in terms of programming code than markup come over to web development.

    • http://twitter.com/AliceWonder32 Alice Wonder

      This is what I do but server side with php DOMDocument class. I have a library of functions that generate DOM content or manipulate DOM content for me in an object oriented fashion. When I have need to do it client side, jQuery is actually pretty good at that.

  • Matthew Vaughan

    I actually really like this idea, have wanted it for some time, at least in limited scope.

    But since modern browsers already support it (treating an unknown element as a div which can be styled with css) and there’s a polyfill (shiv) for older browsers, no change is needed.

    Of course it won’t validate as “standard” html, but aside from that there’s no reason (other than possible conflict with future elements defined through standards, so namespacing them might be a good idea) not to use whatever element names you feel like right now.
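
    Roughly, the shiv approach amounts to the following (the element name is invented): a single createElement call is enough to make legacy IE build a styleable node for the unknown tag, and CSS handles the rest.

      <script>
        document.createElement('my-sidebar');   // nudges old IE into recognizing the tag
      </script>

      <style>
        my-sidebar { display: block; float: right; width: 30%; }
      </style>

      <my-sidebar>Stylable even in legacy browsers.</my-sidebar>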

    • Matthew Vaughan

      Oh, of course such elements won’t supply new behavior, they’d have to be polyfilled with js etc. to do anything, but that can serve as a prototype and if enough people use it, browser vendors could consider adding it as a native functionality.

  • http://twitter.com/mbutterick Matthew Butterick

    Lurking behind Simon’s suggestion that we liberate HTML is a more consequential question: whether the W3C’s approach to developing standards will meet the needs of the web for the next 20 years. Now that we’ve had nearly 20 years of experience with the W3C, that conversation can be — indeed, must be — based on facts and experience, not wishful thinking.

    Simon, you don’t agree with me that the W3C should be disbanded. OK, fine. But something has to change, doesn’t it? Otherwise, we’ll be doing the same thing, but expecting different results. Which is madness.

    The urgency has grown recently. For a long time, the web really had no competition. Now it does. Developers and users are migrating rapidly to app platforms like iOS and Android, and media platforms like the Kindle. These platforms have been successful in part because they’re strong in areas where the web is weak (for instance, offering consistently implemented standards and a better financial model).

    Certainly I disagree with the view that the W3C’s way of doing things is the only way. People and companies come together on standards when it’s in their rational interest to do so. There’s no magic to it. It happens all the time in the technology industry, and usually without a formal consortium.

    Open source is one model. If we’ve learned anything in the first 20 years of the web, it’s that the hand that codes the browser rules the world. And a standard is only as good as its worst implementation. Open source has shown that it can solve a lot of these problems by removing conflicts of interest from the process. It has also shown that “benevolent dictators” can accomplish more, faster, than the Model U.N. approach favored by the W3C.

    Surely we can do better. And if we can do better, surely we must. Otherwise, information will continue to migrate off the open web, back into the walled gardens and marketplaces controlled by corporations, and we won’t end up with “universal access to a large universe of documents.” We’ll just end up with the biggest SkyMall catalog in the universe.

    • Simon St.Laurent

      All of this sounds good to me – and to be fair, a lot of the issues you raised come from CSS, not HTML. I see 'de-standardizing' HTML as the easiest place to start, a place where web folk can get used to the idea of innovating on their own.

      I'm not exactly surprised by the volume of "but everything must be standardized" comments here, but, uh, yeah – the vehemence is pretty amazing. Migrating to models that look more like open source is going to require convincing people that these things can be open.

      Hopefully that leads to places we both – and others too – will like!

      (Given the volume of open source code used on the Web, it doesn’t seem like it should be this hard, does it? But it is.)

  • http://twitter.com/whudat whudat

    How would any browser know what to do with tags you’ve created for your own convenience?

    • Simon St.Laurent

      You just tell the browser what to do with those tags using CSS for style, JavaScript for behavior, and WAI-ARIA for accessibility semantics. That’s all there already.
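
      A compact sketch of all three layers on a tag no browser has ever heard of (the tag name is invented):

        <!-- WAI-ARIA supplies the semantics via role and aria-live -->
        <x-alert role="alert" aria-live="assertive">Saved!</x-alert>

        <style>
          x-alert { display: block; padding: 1em; background: #fee; }  /* CSS: style */
        </style>

        <script>
          /* JavaScript: behavior – dismiss the alert on click */
          document.querySelector('x-alert')
                  .addEventListener('click', event => event.target.remove());
        </script>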

  • has

    Here's a slightly bolder suggestion: how about everyone stop fiddling and futzing with HTML/JS/CSS standards for five minutes ('pathologically myopic turd-polishing' is the phrase that most comes to mind) and, just as an experiment, try throwing both HTML and all today's so-called web browsers under a bus to see how it feels. Everybody has gotten so utterly conditioned to think that WWW ≡ HTML and HTML ≡ WWW, nobody even considers the possibility that maybe they've all prematurely dived down one particular rabbit hole. Instead of continuing to rummage about in the dirt, maybe it's time to climb right back out to the surface and look around to see what else might be being missed?

    The web was intended and designed for the free exchange of user-authored documents of every shape and kind – the only common characteristics being the inclusion of hyperlinks to connect them together. HTTP defined verbs for moving documents around (GET, POST, PUT, DELETE; though it could do with PATCH, MOVE and COPY too), along with content negotiation headers that allowed any client to negotiate with the server to determine which mutually agreeable format was most appropriate when GETting a representation of that document (e.g. a document might be published in Word format, but through the magic of server-based transcoders served as a PDF or RTF or plain text or – heck – maybe even HTML representation if that’s what floats a client’s boat).

    IOW, the whole point of the Web's original architecture was to avoid *exactly* all these incessant, insane Debates about which is The One True Format that all web clients and servers shall communicate by. But instead of preserving and enhancing this loose, flexible coupling, popular web development culture and tools have gone in exactly the opposite direction: embracing ultra-tight coupling that dictates *exactly* which dialect (HTML) all clients and all servers must speak in order to achieve anything, and then stacking hundred-ton weights on top just to be sure. It's like every single developer utterly forgot their 'Abstraction 101: Coupling & Cohesion' lessons the first moment Mosaic dazzled them with a little bit of shiny, and has never remembered them since.

    Interacting with today's web is like sucking through one-way straws sold at considerable expense to the great mass of unwitting users by a specialist elite of Professional Straw Vendors from the highly exclusive Professional Straw Vendors Union. The modern web browser, for all its fancy geegaws, is still a thick, dirty, semi-opaque black-and-white 1950s TV screen about three inches wide; but because it's the only sort of TV set that the vast majority of us have ever experienced, we naturally believe that is The Way Things Are Supposed To Be. The irony is, the original design for the WWW was actually speccing a 50″ six-color 3D high-def ultra-thin plasma screen; it's just that nobody thought to build the damn thing before the clumsy archaic CRT vendors got a lock on the market.

    Here's a hint at how things could, should be: imagine the web as a seamless extension of the user's desktop, and vice-versa, and instead of 'uploading' and 'downloading' local files via a web browser just automatically replicate and sync chosen documents between desktop and web server. Equip web servers with a good set of transcoders for popular file formats, and teach all document reader and viewer applications to understand and employ hyperlinks and content negotiation as first-class features. There's no reason anyone should have to go via a dedicated 'web browser' application in order to interact with the web; instead, downgrade their status to what they actually are: viewer applications for HTML documents specifically. Rectify the gross, global dysfunctionality in the Web as a whole, and maybe the comparatively modest brain damage within HTML will eventually heal of its own accord.

    The real challenge here isn’t in going back to the drawing board with a fresh perspective on how the web *should* work. It’ll be figuring out how to wrangle the large number of professional and amateur web developers, all of whom have sunk vast amounts of learning, work and ego in the current status quo. Those folks will instinctively fight tooth and nail to defend that investment and the exclusive privileges that come with it, most likely lynching anyone who even suggests tearing the whole crazy stack down and starting over. And that, I fear, is a much tougher problem than figuring out how to make the Web work sanely. :/

    • http://twitter.com/mbutterick Matthew Butterick

      I followed your argument up to the point of “imagine the web as a seamless extension of the user’s desktop” and “downgrade [web browsers] to what they actually are: viewer applications for HTML documents.” Is this not already the case? Seems like nearly every desktop application today includes some set of Internet-connected functions. And no one would debate that today’s web browsers are primarily HTML viewers (notwithstanding things like Google Docs out on the edges).

      As for “tearing the whole crazy stack down and starting over,” are you aware of any person or group that has put forward a plausible proposal for a replacement? Genuinely curious.

      • has

        “Seems like nearly every desktop application today includes some set of Internet-connected functions.”

        A proprietary client app talking via some proprietary [pseudo-]protocol to the same vendor’s proprietary server app isn’t the same thing as an unencumbered platform for enabling free bi-directional sharing of arbitrary documents between ordinary users.

        A silo is a silo, regardless of whether it’s architected as a custom HTML-based server-side application which any dumb generic HTML client can talk to, or as a fully distributed application consisting of smart desktop/mobile front-ends talking to smart server-based back-ends. Either way, you’re still creating an artificially high, restrictive and/or proprietary barrier to any user who wishes to share documents. To publish their work openly, they either have to learn a crapload of hoary technologies like HTML+JS+CSS+LAMP in order to code/customize/host their own dedicated ‘web application’ or else they must entrust their work to an existing proprietary system run by a third-party service vendor.

        That’s a far cry from just going into your existing file manager, right-clicking a PDF document’s icon and selecting the ‘Share With…’ option that makes the document immediately visible to ‘Friends & Family’. That’s an incredibly simple yet powerful interaction model, but for it to work seamlessly you need the fundamental openness and inherent simplicity of the Web as it was originally designed, as behind the scenes the following operations must be performed:

        1. The document must be encrypted as needed and, where appropriate, tagged with contact info and keys for each individual who is allowed to access it, along with any other pertinent info, such as information about the document’s origins.

        2. The document must be transferred to one or more web servers that will be visible to those individuals. Depending on document sensitivity and/or parsimony, this might be an intranet-based server behind a corporate firewall (e.g. document is for certain employees’ eyes only), or a public server belonging to the user’s ISP, or a whole bunch of public servers operating as a distributed load-balanced cluster or P2P network.

        3. Where appropriate, the server should dispatch automatic notifications to everyone on the document’s access list, allowing those users’ devices to inform their users, copy/sync the document locally, open it in the relevant application, etc.

        The whole expert system must make all these sorts of decisions automatically, based on its knowledge of who the document is intended for (author only, friends or colleagues, the whole world). This would in turn be powered by other services such as the user's address book, plus whatever general or custom rules the user has set up to assist the automated decision process. For example, the user's address book would be used both to create the 'Friends & Family', 'Work Colleagues' and other entries in the file manager's 'Share With…' menu option, and to supply whatever other details (relationships, contact info) are needed to determine where the document should be hosted and who should be notified. The ability to set up standard and/or custom rules provides both safety and convenience: personal snaps might be subject to a rule that normally limits their scope to friends and family unless otherwise indicated; naughty pics pilfered from 'Big-n-Bouncy' would not leave the privacy of your own devices; frothy screeds on What's Wrong With The Web would instantly be visible to the whole world; etc.

        It’s quite a different interaction model to what we have today. Currently we deal with lots of complexity and formal and ad-hoc processes, much of which occurs within many fragmented unrelated content-specific silos. For example, we’ve a couple silos for sharing fast and slow discussions (chat and email), another silo for sharing our photos (a proprietary HTML-based online service like Flickr or Tumblr, which might or might not have a degree of integration with our iPhoto application’s local silo), yet another silo for sharing arbitrary documents (e.g. YouSendIt, or FTP if you’re old-school). Any one of which more or less fulfills its own narrowly stated purpose, but seen as a whole are as far from a coherent, integrated, streamlined interaction model as you can imagine: each one works very differently, so adds its own learning curve, and none of them cooperate well (if at all) with each other, assuming it’s even in their own vendors’ interests to try.

        Yes, a user can just about obtain all the functionality they need from the current arrangement, and as long as you’re operating solely within a single silo the individual experience might even be quite pleasant. But ultimately the user is working according to the vendors’ rules, not their own.

        For example, if I'm having an email discussion and I want to let the other person see some photos I took this morning, either I have to attach those photos to the email or else upload them to my chosen online photo host and paste the URLs into the email. Or if, some way into a rapid-fire chat-style discussion, I decide to switch to a more considered long-letter approach, I have to drop out of the chat client, fire up my email client, paste in addresses, type, send, wait, etc, etc. And I have to keep performing all these different modal switches for all types of documents and all styles of interaction, *and* I have to do so in the face of rapidly rising information load as the number of people in my address book and the amount of data on my system continue to rocket ever upwards.

        Such working practices simply do not scale well; eventually I'll get buried by them or throw myself at the mercy of a Facebook or some other proprietary party who promises to manage and simplify that load for me. It's certainly a long way from the original vision of the Web operating as a single, consistent interaction model allowing anyone to publish whatever they like in whatever form they like, just as long as it's all tied together through hyperlinks.

        "As for "tearing the whole crazy stack down and starting over," are you aware of any person or group that has put forward a plausible proposal for a replacement?"

        First, my whole argument is *not* to ‘replace’, but to *correct*. Everybody wants to be the rich famous hero who invents the Next Big Thing and Saves The Day. Nobody wants to be the dirt-caked shit shoveler who quietly and unglamorously fixes and clears all the existing sewers so they can finally flow as intended. However, ‘replacing’ is impossible: the cost of total change is utterly prohibitive to everyone, so the first group simply aren’t operating within reality. More to the point, there is no need to ‘replace’: the underlying technologies (TCP, UDP, HTTP) and basic interaction model (REST) are essentially sound, and while they may require some tweaks to eventually realize their full potential, most of the task is just making sure everyone is using them as they were actually designed to be used, rather than according to the myriad messed-up misconceptions and misunderstandings that currently exist in 99.9% of web developers’ minds. But like I say, clearing away all that crap is a hard and largely thankless task, so not particularly appealing to anyone, even though it increasingly needs to be done before the drains overflow completely.

        Second, as to who might be working on this sorting-out today… Offhand? I could not say; not without a whole lot of rummaging to see if I can locate the 0.1% of sanity that might be buried amongst the 99.9% of misinformation and mess. There were the folks who designed the Web in the first place, before Mosaic came along and turned it into an HTML-producers-only club. They started with the right general idea, but their voices have long since been drowned by the insane racket of a million web developers who have completely misinterpreted their original message and are far too busy shouting their own story to care. There may be many useful HCI insights which could be gleaned from various bespoke "let's invent the Web's replacement" projects like Curl and Wave, but in terms of actually replacing the Web they are living in La-La Land. Big commercial entities like Google, MS, Apple etc. are too busy monetizing the current mess to want to undertake a unilateral cleanup. FOSS could, if it wanted, build the basic technology infrastructure, but it largely lacks the resources to do novel or original R&D itself, preferring to blindly copy and commoditize what the big commercial vendors are already doing. There may be some pure R&D quietly going on in other venues: research universities, institutes like PARC, even lone individuals who've gotten tired of waiting and decided to roll up their sleeves and have a go; who knows? Heck, even I've spent a bit of time getting down and dirty with HTTP, enough to realize the basic mechanics I describe are not that hard (security and crypto services are probably the toughest, but you wouldn't require that for a working proof-of-concept), but I don't have the time or resources to build out a complete model (though I may in time produce useful bits and pieces as a side-effect to my regular work which I'll be happy to open source so that others might run further with it).

        Sorry, I know that’s not the most productive response to that question, but if I *knew* everything was all safely in hand and the Solution already well on its way I wouldn’t be wandering the Web writing these sad lonely indigent screeds about it.

        • has

          (Damn, Disqus ate my extra line spacing and -grins- symbol… Pain!)

    • Cybernaut

      I agree with the sentiment about throwing HTML and Javascript under a bus. It's been tried before, with some effectiveness, but sadly nobody was paying much attention. Check out MIT's CURL project:

      http://www.wired.com/science/discoveries/news/2001/12/48818?currentPage=all

      • has

        Curl creates a tunnel within a tunnel: this is not the answer. My argument is to break down the existing tunnel’s hegemony, not embed yet another one within it. The goal is not to create a ‘better HTML’ (an additive process); it is to make it largely redundant (i.e. a subtractive one).

        Overthinking solutions is a pathological problem in programming, the classic mantra stating that there is no problem that cannot be solved by adding another layer of abstraction on top. But this isn’t creating a solution, it’s kludging a[nother] workaround: sprinkling a sweet-smelling perfume atop a great mountain of stinky stuff. And all the while racking up ever greater levels of complexity, fragility, technical debt. This is not a sound long-term strategy for any system, never mind one with such mind-bogglingly huge scalability, flexibility and durability demands as the World Wide Web.

        Regardless of any technical merits it may possess in itself, a proprietary embedded technology like Curl will almost certainly fail to achieve popular success for the same reasons that similar implementation strategies ultimately fail (e.g. Google Wave, Adobe Flash):

        1. Control freak tendencies: the vendor’s need to monetize their new platform means that neither it nor they play well with others, making it extremely difficult for the technology to achieve the critical mass necessary to produce self-sustaining growth.

        2. Inability to harness existing Network Effects to bootstrap the new technology, meaning it must build up its own Network Effect completely from scratch – a *far* more challenging and risky approach to achieving mass success.

        This is where dedicated programmers often fall down: their favorite task is writing lots of exciting new code, so they naturally try to solve every problem in this way – even when what's actually needed is to *eliminate* such code (including the need for it in the first place), as in the case of the Web.

        The most you can sensibly achieve from creating such novel, but silo-based, projects as Curl or Wave is to use them as a rapid prototype or research testbed for innovatory ideas about how human-computer interaction could work in an ideal version of the Web. Once you learn how those ideas play out in practice, throw away the entire proprietary platform but remember those insights when thinking how best to tweak the current development trajectory of the mainstream Web to make it head in a more productive direction.

        The modern Internet and the Web’s foundational technologies (TCP, UDP, HTTP) were born of the desire to break down the barriers between competing proprietary systems for the overall benefit of all (see: Game Theory). But ever since Mosaic, vendors have been working – both by accident and intent – to re-proprietize it. For instance, the only ways to get an arbitrary document (a .doc or .odf or .pdf or .jpg or .c or whatever) shared online today are:

        1. squeeze it through the ubiquitous HTML straw into some proprietary web-based document management system, then tell other folks how to suck that document back out through the same HTML straw, or:

        2. shove it into one of the old anachronistic niche silo platforms like FTP, which is dumb as dirt and doesn't integrate with the primary HTTP infrastructure that drives the Web proper.

        And yet, if you dig into an old-timey web server like Apache, you can still find hints of the Web’s original interaction philosophy from before the time it took a left turn into the HTML EVERYWHERE cul-de-sac. Sure it’s crude as hell and doesn’t work properly and is rooted in a rather naive static filesystem vision of back-end document management, but there’s nothing that fundamentally stops Apache serving out any type of document you like.

        Instead of creating yet another flashy high-profile "Look-at-me!" proprietary AJAXy HTML-based web application that allows users to push individual files back and forth through the web-browser/HTML-app straw, web developers should stop designing with their egos and instead go back to their humble roots in the unglamorous 'low-level' foundation technologies: the basic web server and client apps that underpin almost all web access today.

        Take a server-side application like Apache and build out first-class support for all the core HTTP verbs and content negotiation, such that any client that knows how to use those verbs and Accept headers as Fielding et al originally intended can push and pull any type of document in and out. Then build out fine-grained access control so that a document’s owner can decide exactly who is allowed to view it: author only, selected individuals, groups of friends or colleagues, anyone on the WWW, etc. (The SSH and crypto guys might have some ideas on how to make this work at the scale and efficiency required, and the HCI folks can work out how to make using it a slick no-brainer for ordinary users.) Add support for push notifications so that when someone publishes a document tagged with access rights for their friends and family, those individuals’ devices receive automated notification of the update, allowing them to inform their user or even just pull down the document/sync it with a local copy automatically. Or, heck, take a newer web server app such as Nginx that has less accumulated crud and do a clean build out of that. With the right long-term strategic development plan to guide it, such work could be done quite quickly and easily. And because this functionality is being effected at the bottom-most layer of the server-side platform, rather than within a bespoke ‘web app’ running atop a standard server app, it will quickly spread across the whole server-side space, ready to be turned on as soon as clients start asking for it.

        On the client-side, things are a bit trickier. Like I said above, trying to establish a completely new mass product/technology from scratch is a waste of time; you have to piggyback on existing products and technology, leveraging their existing ubiquity and network effects to bootstrap your own amended vision of Web-based HCI. This basically means getting buy-in from some or all of the major web browser vendors, many of whom have vested interests (ego, conservatism, commerce, etc.) in preserving the clunky status quo which places their products front and center of *all* Web transactions. Persuading them to downgrade the status of their own products in the hope of someday achieving a far stronger Web that will eventually benefit all will be a tough sell. Still, if you can get past that hurdle, the next step would be to take a browser like, say, Firefox and split it into two essentially standalone applications: a faceless background application that provides services for interacting with the Web (HTTP, authentication, URL management, etc) and a frontend GUI application dedicated to displaying HTML documents and basically looking and feeling like the previous all-in-one 'web browser' app. Then, as with the server-side, straighten out the back-end app so that it knows how to use *all* the HTTP verbs (not just GET and POST) and do content negotiation correctly (something all popular browsers currently do wrong).

        Once you’ve got all that HTTP client-side functionality running as a standalone service, *any* end-user application or application plugin can leverage *all* that considerable power. Which means, for example, an enterprising OS X developer might create a Finder plugin or FUSE application that allows users to interact directly with the web server to push and pull any document they like, tagged with whatever access permissions they choose. Similarly, word processors, image editors and other apps could use the same client HTTP services to push and pull documents themselves; e.g. click a URL in a Word document, and Word automatically negotiates with the server to retrieve the linked document in a format optimal for Word (.odoc/.docx/.rtf), without any of the hoop-jumping rigmarole of going through an HTML Web Application or HTML Web Browser. Eventually, as OS vendors get on board and the level of integration improves, the OS’s own standard file services could provide seamless syncing between local filesystem-based documents and online copies, plus versioning, transcoding, editing/annotating, hyperlinking and all that other tasty stuff. Instead of high-profile, in-your-face Web Browsers and HTML Web Applications demanding all your attention and obedience, most basic web interactions would have no visible presence beyond the user indicating which documents should be visible outwith their private system, and to what extent.
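
        Client-side, the heart of it is just a couple of plain HTTP calls. A sketch only (the URL and the exact media types are examples, and fetch simply stands in for whatever the faceless background service would actually expose):

        ```js
        // Sketch only: an ordinary app pulling a document in the format it prefers
        // and pushing an edited copy back, using nothing but HTTP verbs and
        // content negotiation. Everything here is illustrative, not a real API.
        async function pullDocument(url) {
          const res = await fetch(url, {
            headers: { Accept: 'application/vnd.oasis.opendocument.text, text/html;q=0.5' },
          });
          if (!res.ok) throw new Error('GET ' + url + ' failed: ' + res.status);
          return { type: res.headers.get('Content-Type'), body: await res.blob() };
        }

        async function pushDocument(url, body, type) {
          const res = await fetch(url, { method: 'PUT', headers: { 'Content-Type': type }, body });
          if (!res.ok) throw new Error('PUT ' + url + ' failed: ' + res.status);
        }
        ```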

        Finally, once you’re at that end-game stage, you’ll be in a position to start rationalizing a big chunk of today’s hodge-podge of Internet functions: making FTP totally unnecessary, rendering clunky old Email and Chat mechanisms redundant, and so on. So fixing the Web will ultimately reap rewards well beyond the Web itself.

        And *that* is how you fix the real problem at the heart of the Web today. Start now, and you could have all the enabling technology nailed down within five years, and a mass transition to the greatly improved interaction model in ten. Obviously you’ll disrupt a lot of big egos and a few business interests along the way, but anything less, like endlessly quibbling and fiddling over ‘HTML standards’, is really just rearranging deck chairs on the Titanic. Plus, fixing the basic Web interaction model by [re-]decoupling it from HTML technologies and HTML applications will ultimately benefit HTML, JS, CSS, etc. too. Once they’re unburdened of total responsibility for *all* web interactions, they’ll be able to evolve far more freely without the constant paralyzing fear of Breaking Everything.

        Once users are pushing and pulling their everyday documents without going through HTML-based interfaces, HTML and related technologies can refocus on what *they* are actually best at: providing SGML-style document representations, implementing portable client-side GUIs and associated front-end View logic that connect to server-based Model logic and data management systems to form Web-based distributed applications, and so on.

        TL;DR: “I like your Web. I do not like your current-day Web Applications/Browsers/Developers, they are so unlike your Web.”

        • has

          (Ugh, apologies for GYOB length. Had Disqus bothered itself to provide any Preview or Edit functions, I’d have noticed and reduced it to more digestible form. Have I mentioned already how much HTML-centric HCI sucks?:p)

          • Aankhen

            OT, but registered users can edit their posts. :-)

    • Luiz Felipe

      I have always thought that the web server should just serve data; what I do with it is my problem. The problem is that we now have a draconian mandate that all code must be JavaScript, all “data” must be HTML, and all visualization must be forced on you by some CSS.
      Because of this, I think the app model plus the web model (HTTP) now has a chance to overcome that. I am trying to build web services that just serve data, be it XML, JSON, or plain text. Then I build an app to parse and display the data, with a way to navigate hyperlinks. If a link resolves to the same kind of content, it stays in the same app; if it is HTML, it fires up the old browser.
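      Roughly like the sketch below (renderInApp and openInBrowser are just placeholders for whatever the real app would provide; the Accept header lists the formats the app prefers):
      ```js
      // Rough sketch of the dispatch described above. The two handlers are
      // placeholders; a real app would supply its own rendering and hand-off.
      const renderInApp = data => console.log('render in app:', data);
      const openInBrowser = url => console.log('hand off to the old browser:', url);

      async function follow(link) {
        const res = await fetch(link, {
          headers: { Accept: 'application/json, application/xml, text/plain, text/html;q=0.1' },
        });
        const type = (res.headers.get('Content-Type') || '').split(';')[0].trim();

        if (type === 'application/json') {
          renderInApp(await res.json());   // data: stay in the app
        } else if (type === 'application/xml' || type === 'text/plain') {
          renderInApp(await res.text());   // data: stay in the app
        } else if (type === 'text/html') {
          openInBrowser(link);             // HTML: fire up the old browser
        } else {
          throw new Error('No handler for ' + type);
        }
      }
      ```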
      The problem comes when you need to hyperlink to a URL that belongs to another app; then these “garden markets” get in the way. Instead of it just being another hyperlink, the user must download and install the other app, and we are no better off than on old operating systems.
      Perhaps the simplest solution is to bolt hyperlinking and downloading/installing directly onto the operating system, instead of having a marketplace you must visit to download apps. You just follow a hyperlink on your operating system directly. It downloads the small app behind the hyperlink and runs it, and if that app invokes more hyperlinks, more apps are downloaded and run. And from time to time the old app cache can be purged, i.e. it auto-uninstalls.
      And as a way to make it secure, SSL can be used. But in the end, security is always the responsibility of the user. You can’t go to certain places in your city and expect them to always be safe; you have to have a sense of what you are doing and where you are going.

      • Jason Lane

        “all ‘data’ must be HTML”.

        Really? Surely HTML is about structure in its purest form, not ‘data’ at all.

        • Luiz Felipe

          Today HTML is more like a presentation language than a data language.
          Sure, HTML was just text with hyperlinks back then, but now it has many things that are collateral to the data, like the form in which it is presented on screen.
          I was talking about structure and data, more like XML.

  • http://www.facebook.com/kurt.cagle Kurt Cagle

    Simon,

    My single biggest argument with HTML5 (and with Ian, come to that) was that there was such a completely dogmatic insistence on not providing an extension mechanism for the specification. In 1998, when HTML 4 was standardized, the requirement for having a standardized set of terms was obvious – there was no clear mechanism for providing presentation+behavior that was sufficiently performant to be renderable in most browsers. XSLT stylesheets were a first step in that direction, but even there it was a meta-operation that felt kludgy at best; it generated static output and became hideously complex once behaviors were introduced.

    XBL was another interesting possibility, though in retrospect designing XBL fully as a JavaScript construct would have been a better alternative. The upshot there was that a standard mechanism for behavior construction existed that could have been used, and it was largely NOT pursued because it allowed for the possibility of non-core HTML.

    To that end, I agree with your suggestion completely. Creating a standard polyfill mechanism, and moreover using this as an endorsement of the idea that alternative vocabularies are perfectly valid in HTML, could go a long way towards creating a truly modern web application framework. However, I’ve been burned too many times by the belief that such a rational vision would happen, only to see the HTML purists stomp messily upon that vision. I’m not holding out a lot of hope for the web component framework – after nearly a decade of attempting to create the HTML5 “Best of All Possible Worlds”, I do not see WHATWG members taking kindly to something that will reduce their authority.
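
    For what it’s worth, the core of such a polyfill can be tiny. A minimal sketch, using an invented <x-warning> element that no HTML specification defines, decorated entirely by script and kept accessible through WAI-ARIA:

    ```js
    // Minimal sketch: find every instance of a made-up <x-warning> element and
    // give it presentation, plus an ARIA role so assistive tech understands it.
    // The element name and styling are invented purely for illustration.
    document.addEventListener('DOMContentLoaded', () => {
      document.querySelectorAll('x-warning').forEach(el => {
        el.style.display = 'block';
        el.style.border = '2px solid #c00';
        el.style.padding = '0.5em';
        el.setAttribute('role', 'alert'); // WAI-ARIA keeps the custom vocabulary accessible
      });
    });
    ```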

  • http://twitter.com/LewisCowles1 Lewis Cowles

    No No No No No! If you want to use XML, then bloody well use it; if you want to code HTML, then use that. You are arguing we should be going backwards to idiotic specifications. A computer is a computer: the more things you give it to do, the less time each task can use the processor for. If there were ever to be an addition to HTML that made sense, it would be the division of content between the static and the dynamic.

    • Wolowizard

      Agreed. People still arguing for XML as a standard are, in my opinion, idiots. It’s heavyweight and ugly. If you can’t understand what JSON { “hello” : “world” } is, or the format in which it’s used… you shouldn’t be a developer. Simple. You don’t understand programming objects – and if you do understand them, then stop being a Microsoft developer and move on. You don’t need any of those horrid XML server-side parsers.

      I shall say it again… if no one broke standards or morphed people’s interpretation of what a standard may be, you would have no creativity or innovative ideas. It’s all about breaking standards and making new ones.

      • http://twitter.com/LewisCowles1 Lewis Cowles

        First, +1 to Wolowizard for the fantastic Big Bang reference.

        While I am kind of apathetic about JSON vs XML (for me it’s Base 2 vs JSON vs XML, and I don’t think XML users have difficulty using JSON; they just don’t like it…), I feel that the breaking-standards bit was not quite right.

        If a standard is insufficient, create a new standard, sure; try to incorporate others’ ideas and methodologies, sure; but don’t expect to cry to developers if it is not supported by major browser vendors. Right now we have nearly everything we need in the browser in raw form, really like an ingredients list; all we need is to level up the capabilities and the speed with which things happen, and develop some compounds from our base elements, so that we can really push the envelope.

        Also, it is important that we do not deviate from a standard because it is easier to do so, but because the existing standard has shortfalls, is inconsistent, or does not work (still waiting for a cross-browser padding that behaves in one consistent way)…

  • nitroxido

    In line with the discussion, I must say that for a project I’m working on, I’ve developed a set of PHP libraries which allow the designer of a site to create custom markup and custom CSS effects. The programmer then must provide some “glue logic” in order to translate the markup and effects into actual CSS+HTML+JS. Examples are:

    to create a Facebook “like” button on the page

    to create a vertically stacked list of the elements inside the tag.

    This way, designers can create widgets and complex behaviours using familiar markup techniques. You can even use Microsoft’s XAML code if you also know how to provide the “glue logic” in HTML+CSS+JS.

    If and when HTML is “liberated” as the author wants, I could take the “glue logic” away from the server and move it into the browser, so my site would still work the same internally, save for that bit of leverage.
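
    As a sketch of what that browser-side glue might look like (my actual tag names above were eaten by the comment system, so here I use an invented <v-stack> element for the stacked-list example):

    ```js
    // Sketch only: the same kind of "glue logic", running in the browser instead
    // of in PHP. An invented <v-stack> element is translated into an ordinary
    // unstyled list, stacking its children vertically.
    document.addEventListener('DOMContentLoaded', () => {
      document.querySelectorAll('v-stack').forEach(stack => {
        const ul = document.createElement('ul');
        ul.style.listStyle = 'none';
        ul.style.margin = '0';
        ul.style.padding = '0';
        Array.from(stack.children).forEach(child => {
          const li = document.createElement('li');
          li.appendChild(child);   // move each child into its own list item
          ul.appendChild(li);
        });
        stack.replaceWith(ul);     // swap the custom element for real HTML
      });
    });
    ```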

    This said, I’m completely on board with that point of view! Free Willy! er … Free HTML!

  • sathish

    Yeah, I want to know how to insert the values from a variable into a table in SQL. Please explain.

  • fjpoblam

    Me? Uneducated. But speculation. Is this a great idea for developers but crap for clients? Does it involve existence or download (and thereby maintenance) of a greater amount of software to *interpret* more types of pages? And does it thereby involve slower page loads and greater exposure to malware? And how about cross-browser testing: doesn’t this become even *more* of a nightmare? Address these, and teach me, oh master.

    • Simon St.Laurent

      Each polyfill comes with some JavaScript and CSS overhead, yes. However, depending on what you want to do, it may not be that large or that complicated to test. Once built, it should be able to run across lots of documents without new implementation, and odds are good that the cost to download will be less than an extra image on your site.

      So yes, there’s a cost, but it seems tiny compared to most other web development styles.

  • Aaron

    Thank you for confirming what IT / Infrastructure architects have been joking about since 1998: that web designers are the lowest form of content producers, whether by code (programming) or artwork (graphics/sound/animation).

    We’re very glad you put your name on this article; it makes it easier to blacklist your kind of stupid.

  • christhekeele