ENTRIES TAGGED "http"
On both front and back end, the Web challenges conventional wisdom
The Web is different, and I can see why programmers might have little tolerance for the paths it chose, but this time the programmers are wrong. It’s not that the Web is perfect; it certainly has glitches. Nor does success prove superiority: plenty of terrible things have found broad audiences, and there are endless levels to the Worse is Better conversations. And of course, the Web doesn’t solve every programming need; many problems simply don’t fit, and that’s fine.
So why is the Web better?
OSCON 2013 Speaker Series
Caching is the technique that most improves response time in web applications (as Steve Souders shows in Cache is King), but to take advantage of it, every layer of your application must be configured for that purpose.
Most applications are initially developed with little or no use of caching and then must be refactored to fulfill performance goals. However, this approach incurs extra development costs that could be saved if response time is taken into consideration in the early stages of the development process.
The methodology that can save your life while you are still developing your application is pretty straightforward: keep caching in mind whenever you handle data in your system. Whether you are designing web APIs or internal backend data flows, ask one simple question:
Can I survive if the data seen by the user is not the latest?
Sometimes the answer to this question is ‘no.’ For example, I would be fired very quickly if I built a bank system that showed more money than a customer’s account really holds. On the other hand, if the system interacts with general data services like social networks, news, weather, car traffic, etc., there is less need to ensure the latest piece of information is immediately shown to the user.
Of course, the latest data needs to reach the user eventually. Data cannot be too old or you risk confusing the user, but configuring a short expiration time (say, 5-10 minutes or less) for dynamic data that can tolerate it can significantly improve response time. This is called temporal consistency, and it is crucial for a successful caching strategy.
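At the HTTP layer, that expiration time is just a response header. Here is a minimal sketch using only the Python standard library; the endpoint and payload are invented for the example:

from wsgiref.simple_server import make_server

def weather_app(environ, start_response):
    # Non-critical data that can tolerate a few minutes of staleness.
    body = b'{"temp_c": 21, "conditions": "cloudy"}'
    start_response("200 OK", [
        ("Content-Type", "application/json"),
        # Temporal consistency: any cache along the way (browser,
        # proxy, CDN) may serve this response for up to 10 minutes
        # before revalidating with the origin.
        ("Cache-Control", "public, max-age=600"),
    ])
    return [body]

if __name__ == "__main__":
    make_server("", 8000, weather_app).serve_forever()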
Nowadays, web applications are built by mashing up several web services from different sources. The best way to cope with their differing response times and data designs is to temporally cache those elements across all system layers. The same applies to data from your own system when the information has to travel from one part of the world to another in several hops. If the information is not critical, consider caching it at any intermediate stage and reusing it when needed. Caching in the backend can save half of the trip. Even better, caching at the target device or on a CDN can eliminate the full data trip, or reduce it to only the last mile, as an easy way to enhance performance.
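For the backend half of the trip, a small time-to-live cache in front of a slow upstream call is often enough. A minimal sketch (fetch_traffic and its data source are hypothetical placeholders):

import time

_cache = {}  # key -> (expires_at, value)

def cached(key, ttl_seconds, fetch):
    entry = _cache.get(key)
    if entry and entry[0] > time.time():
        return entry[1]                  # still fresh: skip the network trip
    value = fetch()                      # stale or missing: refetch and store
    _cache[key] = (time.time() + ttl_seconds, value)
    return value

# Usage: tolerate up to 5 minutes of staleness for non-critical data.
# traffic = cached("traffic:toronto", 300, fetch_traffic)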
Velocity 2013 Speaker Series
Be honest: have you ever wanted to play Steve Souders for a day and pull revealing stats or trends about web sites of your choice? Or maybe dig around the HTTP archive? You can do that and more by setting up your own HTTP Archive.
httparchive.org is a fantastic tool to track, monitor, and review how the web is built. You can dig into trends around page size, page load time, content delivery network (CDN) usage, distribution of different mimetypes, and many other stats. With the integration of WebPagetest, it’s a great tool for synthetic testing as well.
You can download an HTTP Archive MySQL dump (warning: it’s quite large) and the source code from the download page and dissect a snapshot of the data yourself. Once you’ve set up the database, you can easily query anything you want.
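For instance, a few lines of Python will pull aggregate trends out of the dump. This is a sketch only: the table and column names (pages, label, bytesTotal) match the dumps I have worked with, but verify them against the schema of your snapshot, and the connection details are placeholders:

import pymysql  # pip install pymysql; any MySQL client library works

conn = pymysql.connect(host="localhost", user="httparchive",
                       password="secret", db="httparchive")
with conn.cursor() as cur:
    # Average page weight per crawl label.
    cur.execute("""
        SELECT label, COUNT(*) AS num_pages, AVG(bytesTotal) AS avg_bytes
        FROM pages
        GROUP BY label
    """)
    for label, num_pages, avg_bytes in cur.fetchall():
        print(label, num_pages, round(avg_bytes / 1024), "KB")
conn.close()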
You need MySQL, PHP, and your own webserver running. As I mentioned above, HTTP Archive relies on WebPagetest—if you choose to run your own private instance of WebPagetest, you won’t have to request an API key. I decided to ask Patrick Meenan for an API key with limited query access. That was sufficient for me at the time. If I ever wanted to use more than 200 page loads per day, I would probably want to set up a private instance of WebPagetest.
To find more details on how to set up an HTTP Archive instance yourself and any further advice, please check out my blog post.
Going back to the scenario I described above: the real motivation is that you often don’t want to throw your website(s) into a pile of other, unrelated websites to compare or define trends. Our digital properties at the Canadian Broadcasting Corporation (CBC) span dozens of URLs with different purposes and audiences. For example, CBC Radio covers most of the Canadian radio landscape, CBC News offers the latest breaking news, CBC Hockey Night in Canada offers great insights on anything related to hockey, and CBC Video is the home for any video available on CBC. It’s valuable for us not only to compare cbc.ca to the top 100K Alexa sites but also to verify stats and data against our own pool of web sites.
In this case, we want to use a set of predefined URLs that we can collect HTTP Archive stats for. Hence a private instance can come in handy—we can run tests every day, or every week, or just every month to gather information about the performance of the sites we’ve selected. From there, it’s easy to not only compare trends from httparchive.org to our own instance as a performance baseline, but also have a great amount of data in our local database to run queries against and to do proper performance monitoring and investigation.
The beautiful thing about having your own instance is that you can be your own master of data visualization: you can now create more charts in addition to the ones that came out of the box with the default HTTP Archive setup. And if you don’t like Google chart tools, you may even want to check out D3.js or Highcharts instead.
The image below shows all mime types used by CBC web properties that are captured in our HTTP archive database, using D3.js bubble charts for visualization.
Spin up Python-friendly services with 0 lines of code
Twisted is a framework for writing, testing, and deploying event-driven clients and servers in Python. In my previous Twisted blog post, we explored an architectural overview of Twisted and examples of simple TCP, UDP, SSL, and HTTP echo servers.
While Twisted makes it easy to build servers in just a few lines of Python, you can actually use Twisted to spin up servers with 0 lines of code!
We can accomplish this with twistd (pronounced twist-dee), a command line utility that ships with Twisted for deploying your Twisted applications. In addition to providing a standardized deployment interface for common production features like daemonization, logging, and authentication, twistd can use Twisted’s plugin architecture to run flexible servers for a variety of protocols. Here are some examples:
twistd web --port 8000 --path .
Run an HTTP server on port 8000, serving both static and dynamic content out of the current working directory. Visit http://localhost:8000 to see the directory listing.
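The web server is only one of the plugins that ship with Twisted; run twistd --help to list the ones your version includes. Two more invocations, sketched from the plugin documentation (flag names can vary between Twisted releases, so verify with twistd dns --help and twistd ftp --help):

twistd dns --port 10053 --hosts-file hosts

Run a DNS server on port 10053 that answers queries from a hosts(5)-style file named hosts.

twistd ftp --port 2121 --root .

Run an FTP server on port 2121 with anonymous access to the current working directory.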
The Web Can Teach the Enterprise
I enjoyed talking with Mike Amundsen, Principal API Architect at Layer 7 Technologies, who has spent much of his recent career encouraging enterprise customers to shift toward web architectures. While REST has emerged over the past decade to eclipse SOAP-based “web services”, Amundsen has eagerly promoted the next step beyond the simple CRUD-based model of early REST work: hypermedia.
Our conversation ranged from the history and foundations of REST through the many ways to integrate that work with existing enterprise practice to a glimpse at what the future might hold for frameworks, design, and architecture.
- REST as enterprise architecture principles applied to hypermedia (at 1:57)
- Transitioning from RPC-based models to hypermedia by including additional information in responses (at 3:00)
- The value of opinionated message formats and eventual integration into opinionated frameworks (at 5:51)
- Shifting from shared understandings of object models to messages (at 8:50)
- "Enough coupling, but not too much" to allow mixing of technologies (at 11:15)
- Human negotiation, HTTP negotiation, and responsive web design (at 14:20)
Structuring client-server communications with hypermedia messages
Messages on the Web carry three levels of information: Structure Semantics, Protocol Semantics, and Application Semantics. No matter the implementation style, all three are needed for any successful communication between client and server. This trio (S-P-A) forms the essence of communication over distributed networks.
Most of the time, though, these levels are obscured or muddled at implementation time. For example, both Protocol Semantics (how we create valid network requests) and Application Semantics (domain-specific names like users, customers, orders, etc.) are often mixed together in conversation ("You POST new users to this URL"), and both are usually defined only in human-readable documentation and implemented in the source code of the client application itself. In other words, the protocol-level and application-level semantics are tightly coupled. An easy way to discover this is to see whether you can take the same message format and implement your API using a protocol other than HTTP (e.g. WebSockets or FTP). I illustrated this "protocol-agnostic" design pattern back in 2010 ("A RESTful Hypermedia API in Three Easy Steps").
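Hypermedia keeps the levels apart by letting the message itself advertise the valid protocol moves. Here is a sketch of such a response, written as a small Python script; the field names (rel, href, method, fields) are illustrative rather than taken from any particular media type:

import json

message = {
    "users": [
        {"name": "mike", "href": "/users/mike"}    # application semantics
    ],
    "actions": [
        {
            "rel": "create-user",           # what this transition means
            "href": "/users",               # where to send the request
            "method": "POST",               # protocol semantics, carried in-message
            "fields": ["name", "email"],    # what the server expects
        }
    ],
}
# A client discovers the valid next requests from the response itself,
# instead of hard-coding "You POST new users to this URL".
print(json.dumps(message, indent=2))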
But there is a way to keep these separate from each other and view each of these aspects in their own light. In doing so, you’ll strengthen the quality and value of your message design while increasing flexibility and choices.
Structure Semantics provides the set of rules for creating a well-formed message. XML has rather simple structure semantics. JSON’s rules for well-formedness are vaguer, but in practice JSON.parse(…) turns out to be the ultimate arbiter of such things. Determining the well-formedness of other, more complex media types (HTML, Atom, HAL, Collection+JSON) is tougher, but doable even if external validators are not always available.
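In Python, json.loads plays the same arbiter role as JSON.parse. A quick sketch of the parser acting as the well-formedness check:

import json

for doc in ['{"name": "mike"}', '{"name": "mike",}']:
    try:
        json.loads(doc)
        print("well-formed:", doc)
    except ValueError as err:    # json.JSONDecodeError is a ValueError
        print("rejected:", doc, "->", err)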
Building a successful web implementation that does not contain structure semantics is difficult—and that's a good thing.
Learn to build event-driven client and server applications
I want to build a web server, a mail server, a BitTorrent client, a DNS server, an IRC bot, or clients and servers for a custom protocol in Python. And I want them to be cross-platform, RFC-compliant, testable, and deployable in a standardized fashion. What library should I use?
Twisted is a “batteries included” networking engine for writing, testing, and deploying event-driven clients and servers in Python. It comes with off-the-shelf support for popular networking protocols like HTTP, IMAP, IRC, SMTP, POP3, DNS, FTP, and more.
To see just how easy it is to write networking services using Twisted, let’s run and discuss a simple Twisted TCP echo server:
from twisted.internet import protocol, reactor

class Echo(protocol.Protocol):
    def dataReceived(self, data):
        # Echo back whatever bytes the client sends.
        self.transport.write(data)

class EchoFactory(protocol.Factory):
    def buildProtocol(self, addr):
        # Create a new Echo protocol instance for each connection.
        return Echo()

reactor.listenTCP(8000, EchoFactory())
reactor.run()
With Twisted installed, if we save this code to echoserver.py and run it with python echoserver.py, clients can now connect to the service on port 8000, send it data, and get back their echoed results.
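A quick way to exercise the server, assuming it is running locally; this smoke-test client uses only the standard library:

import socket

with socket.create_connection(("localhost", 8000)) as s:
    s.sendall(b"hello, twisted")
    print(s.recv(1024))    # b'hello, twisted'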
Why the ASP.NET Web API Framework is an essential tool for RESTful applications.
Glenn Block (@gblock) is an O’Reilly author and senior program manager on the Windows Azure Team at Microsoft.
We sat down recently to talk about the newly released ASP.NET Web API Framework, which he helped develop, and why it will become essential to building RESTful applications.
Key points from the full video interview (below) include:
- ASP.NET Web API enables a rich set of clients to consume info [Discussed at the 1:47 mark]
- MVC vs. Web API: find out if one comes out on top [Discussed at the 2:41 mark]
- Different clients negotiate content differently – Web API handles this with ease [Discussed at the 5:50 mark]
- Self-hosting is a big deal, but beyond that Web API introduces flexibility: you no longer need to use IIS [Discussed at the 9:04 mark]
- An HTTP Programming Model for Microsoft [Discussed at the 11:04 mark]
- The newest of the new – Hypermedia, OData, and Web API Contrib [Discussed at the 18:08 mark]
You can view the entire interview in the following video.