Using Iframes to Address Third-Party Script Issues and Boost Performance

SOASTA chief architect Philip Tellis talks about ways developers and third-party script authors can use iframes.

In the following interview, Philip Tellis, chief architect at SOASTA, talks about how iframes can be used to address performance and security issues with third-party scripts, and how the element can help third-party script owners make use of far-future expires headers. Tellis will address these issues in-depth in his upcoming Velocity session, “Improving 3rd Party Script Performance With IFrames.”

How can iframes be used to boost performance?

Philip Tellis: Iframes haven’t traditionally been good for performance. Sub-pages loaded in iframes still block the loading of the main page. Too many iframes hurt performance in the same way as too many images or scripts do. The problem is slightly worse with iframes because each page loaded in an iframe may load its own resources, each of which competes with the main page for available bandwidth.

All of this assumes that (1) the iframe is loaded within the page before the onload event fires, and (2) its src attribute points to a resource that needs to be downloaded. If we prevent either of these two conditions, an iframe carries no performance penalty. We can then take advantage of the fact that the iframe is a complete browser window instance: you can run pretty much anything you want in there without affecting the main page. This is great if you need to download and cache resources like JavaScript, images, and CSS without blocking the page's onload event or forcing a cache reload.
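
As a minimal sketch of preventing condition (1), the iframe can be created only after onload has fired, so it can never delay the main page. The URL `/prefetcher.html` here is hypothetical — it stands in for a page that references the scripts, CSS, and images you want warmed into the browser cache.

```javascript
// Sketch: create a hidden iframe after onload so it cannot block the page.
// The document is passed in as a parameter only to keep the sketch testable.
function prefetchInIframe(url, doc) {
  doc = doc || document;
  var iframe = doc.createElement("iframe");
  iframe.style.cssText = "width:0;height:0;border:0;display:none";
  iframe.src = url;
  doc.body.appendChild(iframe);
  return iframe;
}

// In the browser, wait for onload before creating the frame:
// window.addEventListener("load", function () {
//   prefetchInIframe("/prefetcher.html");
// });
```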

The three ways to reduce perceived latency in any system are to cache, parallelise, and predict, and iframes allow us to do all three without impacting the main page.

What are some of the major issues with using third-party scripts, and how are they best addressed?

Philip Tellis: Third-party scripts have three major issues:

  1. Performance issues, caused by the added latency from an extra DNS lookup, new TCP connection, and the roundtrip to a third-party server that’s outside your control.
  2. Security issues, since a third-party script loaded onto your page has full control over everything on that page, and all user interaction with that page. I’ve written about this in the past.
  3. Total site failure, since this third party is now a single point of failure. Patrick Meenan has a great article about this. This is really a special case of the performance issue, but it’s big enough to make site owners sit up and listen.

The security issues are dealt with by loading the scripts into an iframe that runs on a different domain. This has the downside of not allowing the script to access anything on the page, so it won’t work for scripts that need to interact with the page. There are ways around this using intelligent API design based on postMessage, but that’s beyond the scope of a performance conference.
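
As a sketch of that kind of API design — the origin, message names, and fields below are illustrative, not a real API — the host page only honours messages whose origin matches the widget's domain, and everything crosses the boundary as serialised strings:

```javascript
// Hypothetical example: a widget served from widget.example.com runs in a
// sandboxed iframe on www.example.com and asks the host page to resize it.
var TRUSTED_ORIGIN = "https://widget.example.com";

function handleWidgetMessage(event) {
  // The origin check is the security boundary: ignore everyone else.
  if (event.origin !== TRUSTED_ORIGIN) {
    return null;
  }
  var msg = JSON.parse(event.data);
  if (msg.action === "resize") {
    return { width: msg.width, height: msg.height };
  }
  return null;
}

// In the browser, the host page registers the handler:
// window.addEventListener("message", function (e) {
//   var size = handleWidgetMessage(e);
//   if (size) { /* apply size.width / size.height to the iframe element */ }
// });
// ...and the widget inside the iframe posts its request:
// parent.postMessage(JSON.stringify({ action: "resize", width: 300,
//                                     height: 120 }), "https://www.example.com");
```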

There are a few ways around the performance issues. We can load our third-party scripts asynchronously after onload. This typically involves creating a script node using JavaScript in a function that's called from the onload event handler. Resources added to the page after onload has fired won't block onload (since it's already fired). This won't work for scripts that need to be on the page before the onload event fires, or as soon as possible. For example, performance analytics libraries like boomerang need to measure the time when onload and other events on the page fire, so loading them after onload won't work.
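
A minimal sketch of that pattern (the function name and script URL are illustrative): create an async script node from the onload handler and insert it into the DOM.

```javascript
// Sketch: add a script node after onload so it cannot block onload.
// The document is passed in as a parameter only to keep the sketch testable.
function loadScriptAsync(src, doc) {
  doc = doc || document;
  var s = doc.createElement("script");
  s.async = true;
  s.src = src;
  // Insert before the first existing script tag, a spot guaranteed to exist.
  var first = doc.getElementsByTagName("script")[0];
  first.parentNode.insertBefore(s, first);
  return s;
}

// In the browser:
// window.addEventListener("load", function () {
//   loadScriptAsync("https://example.com/analytics.js");  // hypothetical URL
// });
```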

The method we use at SOASTA is based on ideas that have been refined over the years by smart hackers like Stoyan Stefanov and the devs at Meebo and Olark. Each of them has implementations that work well. I've built on a lot of their ideas, and added a few of my own, to come up with a simple script loader that loads third-party JavaScript without blocking the onload event while still allowing the script to load as early as possible on the page. This technique uses an iframe without an src attribute (which takes care of iframe requirement (2) above).
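
A rough sketch of that kind of loader, simplified from the Meebo/Olark-style implementations (error handling and browser workarounds omitted): because the iframe has no real src, nothing is fetched for the frame itself, and the script then loads inside the frame's own window, detached from the parent page's onload.

```javascript
// Sketch of an src-less iframe loader. "javascript:false" gives the frame a
// document without any network fetch; the script is written into that
// document and loads in the frame's window, not the parent's.
function loadViaIframe(scriptUrl, doc) {
  doc = doc || document;               // injectable for testing
  var iframe = doc.createElement("iframe");
  iframe.src = "javascript:false";
  iframe.style.cssText = "width:0;height:0;border:0;display:none";
  doc.body.appendChild(iframe);

  var idoc = iframe.contentWindow.document;
  idoc.open();
  idoc.write('<script src="' + scriptUrl + '"><\/script>');
  idoc.close();
  return iframe;
}
```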

What about issues with far-future expires headers?

Philip Tellis: Far-future expires headers are one of the best ways to ensure that your in-page resources are cached by the browser and intermediate proxies for a long time. They tell all caches that this resource will not change for a very long time, so always use your local cached copy. This is great for reducing latency. The only thing better is not loading the resource at all.
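
As an illustration, a far-future policy is just a pair of response headers; the one-year max-age and the specific date below are arbitrary examples:

```http
Cache-Control: public, max-age=31536000
Expires: Thu, 31 Dec 2037 23:59:59 GMT
```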

Whenever your resource does change, you simply change the URL of the resource, and the browser and all intermediaries will download and cache the new resource. The old resource stays in cache until it is evicted by an LRU or similar algorithm.
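
As a tiny illustration of that convention (the query-parameter name and version string are arbitrary; a content hash works equally well):

```javascript
// Sketch: give a changed resource a new cache key by versioning its URL.
function versionedUrl(baseUrl, version) {
  return baseUrl + "?v=" + encodeURIComponent(version);
}

// versionedUrl("/js/app.js", "1.4.2") → "/js/app.js?v=1.4.2"
```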

This, however, creates a bit of a problem for third-party script owners. A third-party script owner like the Google Analytics team, the Facebook Like button team, or the boomerang team at SOASTA does not have control over the thousands of sites that include the script on their pages, so changing the URL has no effect until all those pages are updated, and that might never happen. The end result is that most third-party scripts are cached for a very short amount of time. For example, the Google Analytics JavaScript is only cached for 12 hours.

A better method is to somehow suggest to the browser and intermediaries that they need to refetch the resource from the origin server even if they have a cached copy that hasn’t expired yet.

Iframes help us out here as well. In this case, we do need an src attribute on the iframe, but we load it up after the onload event fires, which takes care of the iframe requirement (1) above. The page loaded within the iframe has JavaScript that tells the browser to ignore its cache and refetch all resources within the iframe from the next upstream proxy or server. It also includes headers that tell upstream proxies to ignore their cache. This helps refresh the cache at all levels—assuming all intermediate proxies are well behaved. Unfortunately, we’ve come across a few badly behaved proxy servers, so even a year after running this code, we’ll find about 0.1% of performance beacons coming in from an old obsolete version of boomerang.
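
A sketch of the kind of logic the iframe'd page might run — the version constant and storage key are hypothetical, and the boolean argument to reload() is a non-standard but widely implemented hint to bypass the cache:

```javascript
// Sketch: force a cache-bypassing reload once per release, tracked in
// localStorage so repeat visits don't reload needlessly.
var CURRENT_VERSION = "1.0.1";   // bumped whenever the script changes

function maybeForceRefresh(win, store) {
  if (store.getItem("script_version") !== CURRENT_VERSION) {
    store.setItem("script_version", CURRENT_VERSION);
    win.location.reload(true);   // true: revalidate with the upstream server
    return true;
  }
  return false;                  // cache already holds the current version
}

// In the browser: maybeForceRefresh(window, window.localStorage);
```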

Which is the best solution, and what kinds of developers will benefit from them most?

Philip Tellis: With most things related to software, there is no one best solution. All solutions make compromises that a site developer needs to take into consideration before picking one. Since I am a third-party script author, the solution that works best for me and my customers is to have my script loaded in an asynchronous, non-blocking fashion.

For most third-party widgets that do not need to interact with the site before it has loaded, loading these scripts asynchronously after onload is a good compromise that's much simpler to implement and requires less code.

For a security-minded site, say something that deals with financial or medical records, and is always over SSL, the solution is really to never load anything from a server that you don’t control.

What superpower will people have after attending your talk?

Philip Tellis: Third-party script authors will learn how to build their scripts in a way that allows for far-future expires headers, and that can be loaded onto customer sites without opening them up to a SPOF.

Website owners will learn to identify which compromise to make with the different third-party scripts they reference, and if a third-party script owner doesn’t provide a solution that works, they can point them in the right direction.

Furthermore, I’m hoping to get a lot of good feedback from attendees who have to deal with third-party scripts like the one I write.

This interview has been edited and condensed.
