The overhead of insecure infrastructure

If we don't demand more secure development infrastructure, we get to do it ourselves.

I’m experiencing a slow death by pollen this week, which has prompted me to ponder some of the larger issues of life. In particular, I was struck by the news that an FPGA chip widely used in military applications has an easily exploitable back door.

There is open discussion at the moment about whether this was a deliberate attempt by a certain foreign government (*cough* China *cough*) to gain access to sensitive data and possibly engage in Stuxnet-like mischief, or just normal carelessness on the part of chip designers who left a debugging path open and available. Either way, there’s a lot of hardware out there walking around with its fly down, so to speak.

As developers, we put a lot of time and effort into trying to block the acts of people with bad intent. At my day job, we have security “ninjas” on each team who take special training and devote a fair amount of their time to keeping up with the latest exploits and remediations. Web developers constantly have to guard against perils such as cross-site scripting and SQL injection; mobile developers need to make sure their remote endpoints are secure and provide appropriate authentication.
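
To make that workload concrete, here is the shape of the fix for one of those perils. The whole SQL injection category disappears when queries are parameterized instead of pasted together from strings; below is a minimal sketch using SQLite's C API (the table and function names are invented for illustration; any database library with bound parameters works the same way):

    /* Parameterized query: the user-supplied string travels as bound
       data, never as SQL text, so "' OR '1'='1" is just a weird name. */
    #include <sqlite3.h>

    int find_user_id(sqlite3 *db, const char *name) {
        sqlite3_stmt *stmt;
        int id = -1;

        /* The vulnerable version builds the query with sprintf(); the
           safe version leaves a ? placeholder for the engine to fill. */
        if (sqlite3_prepare_v2(db, "SELECT id FROM users WHERE name = ?",
                               -1, &stmt, NULL) != SQLITE_OK)
            return -1;

        sqlite3_bind_text(stmt, 1, name, -1, SQLITE_TRANSIENT);
        if (sqlite3_step(stmt) == SQLITE_ROW)
            id = sqlite3_column_int(stmt, 0);

        sqlite3_finalize(stmt);
        return id;
    }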

The thing is, we shouldn’t have to. The underlying platforms and infrastructures we develop on top of should take care of all of this, and leave us free to innovate and create the next insanely great thing. The fact that we have to spend so much of our time building fences rather than erecting skyscrapers is a sign of how badly this basic need has been left unmet.

So why is the development biome so underprotected? I think there are several factors. The first is fragmentation. It’s easier to guard one big army base than 1,000 small ones. In the same way, the more languages, operating systems, and packages that are in the wild, the more times you have to reinvent the wheel. Rather than focus on making a small number of them absolutely bulletproof (and applying constant vigilance to them), we jump on the flavor of the day, regardless of how much or how little effort has been put into reducing the exposed security footprint of our new toy.

The fact that we have independent, massive efforts involved in securing the base operating systems for MacOS, Windows, Linux, BSD, etc., is nothing short of a crime against the development community. Pretty it up any way that suits you with a user interface, but there should (at this point in the lifecycle of operating systems) be only a single, rock-solid operating system that the whole world uses. It is only because of greed, pettiness, and bickering that we have multiple, fragile operating systems, all forgetting to lock their car before they go out to dinner.

Languages are a bit more complex, because there is a genuine need for different languages to match different styles of development and application needs. But, again, the language space is polluted with far too many “me-too” wannabes that distract from the goal of making the developer’s security workload as low as possible. The next time you hear about a site that gets pwned by a buffer overrun exploit, don’t think “stupid developers!”, think “stupid industry!” Any language that allows a developer to leave themselves vulnerable to that kind of attack is a bad language, period!
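
For readers who have never met one in the wild, the class of bug in question takes only a few lines of C to demonstrate (the buffer size and names here are purely illustrative):

    #include <stdio.h>
    #include <string.h>

    void greet(const char *name) {
        char buf[16];

        /* Vulnerable: strcpy() happily writes past the end of buf when
           name is 16 bytes or longer, smashing the stack -- the classic
           overrun the language permits without a murmur. */
        /* strcpy(buf, name); */

        /* Bounds-checked alternative -- but the language still leaves
           it to the programmer to remember this every single time. */
        snprintf(buf, sizeof(buf), "%s", name);
        printf("hello, %s\n", buf);
    }

The point stands either way: a language that lets the first version compile without complaint puts the burden on the developer, not the platform.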

The other major factor in why things are so bad is that, evidently, we don’t care. If developers refused to develop on operating systems or languages that didn’t supply unattackable foundations, companies such as Apple and Microsoft (and communities such as the Linux kernel devs) would get the message in short order. Instead, we head out to conferences like WWDC eager for the latest bells and whistles, with nary a moment spent thinking about how the security of the OS could be improved.

Personally, I’m tired of wasting time playing mall security guard rather than Great Artist. In a world where we had made security a must-have in the infrastructure we build on, rather than in the code we develop, think of how much more amazing code could have been written. Instead, we spend endless time in code reviews, following best practices, and otherwise cleaning up after our security-challenged operating systems, languages, and platforms. Last weekend, we honored (at least in the U.S.) those who have given their lives to physically secure our country. Maybe it’s time to demand that those who secure our network and computing infrastructures do as good a job …


  • http://www.beallfp.com Jim

    You have fallen for the nirvana fallacy.

    From Wikipedia:
    The nirvana fallacy is the logical error of comparing actual things with unrealistic, idealized alternatives. It can also refer to the tendency to assume that there is a perfect solution to a particular problem. A closely related concept is the perfect solution fallacy.

    If you have one OS and it is compromised, you are vulnerable everywhere.

  • Ran Biron

    Respectfully (most respectfully) (well, kind of), I don’t buy this. If this were correct, why haven’t the big 5 languages been protected like hell? Where’s this uber-protection for C/C++, Java (and JVM-based languages), C# (and .NET-based languages), and JavaScript (*)?
    In my opinion, we just have to accept the fact of an increasingly complex world.
    Good performance doesn’t rely on hardware advances anymore – it requires more complex techniques.
    Good survivability isn’t just a fast restart and a backup scheme anymore – it’s multiple regions, true high availability and dynamic control over the VMs.
    The same goes for security – it’s just more complex than it used to be. The same way you have to think when querying the database (do I have an index? what would be the added load?), you now have to think when processing input (is this potentially tainted? how do I sanitize it? how do I avoid exposing internal details as potential attack vectors?).

    (*) – I know those are not the current TIOBE index top-5.

  • http://www.boogdesign.com/b2evo/ Rob Crowther

    I’m amused that you think that there’s a “genuine need for different languages to match different styles of development and application needs” but we need only one operating system. Why don’t operating systems need to match different development styles and application needs? Why would a washing machine need the same OS that’s in my phone?

    Also the pervasive nature of security issues belies this dream you have of handling it all at a lower level. There are always going to be leaky abstractions and trade-offs. For instance your statement that “It’s easier to guard one big army base than 1,000 small ones” has already established a security position. It’s easier to attack one big army base than it is 1000 small ones.

  • http://mikewarot.blogspot.com Mike Warot

    The application shouldn’t need to secure anything. The OS should be doing the job… but the way the job is done is wrong.

    Instead of trying to limit the side effects of programs by locking things down as issues are found, why not lock EVERYTHING down, and default it to no access?

    Then you could give a process a list of things it is allowed to do (a set of capabilities). Each process would then only get to operate inside those restrictions, like a universal sandbox.

    Instead of trying to limit side-effects ad-hoc, you would get to decide the limits of side effects before you even run a process.

    The time for capability based security to start making its way into the mindset of developers is finally here… I look forward to getting it made into reality.
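
    For what it’s worth, Linux has shipped a crude ancestor of this idea since 2005: seccomp’s strict mode flips a process into default-deny, after which the kernel answers anything but read, write, and exit with SIGKILL. A minimal sketch (Linux-specific, and far short of a real capability system, but it shows the “no access by default” shape):

        /* After the prctl() call, this process may only read() and
           write() on descriptors it already holds, and exit.  Every
           other syscall gets the process killed. */
        #include <linux/seccomp.h>
        #include <sys/prctl.h>
        #include <sys/syscall.h>
        #include <unistd.h>

        int main(void) {
            const char msg[] = "sandboxed: stdio only from here on\n";

            if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT) != 0)
                return 1;

            write(STDOUT_FILENO, msg, sizeof(msg) - 1);   /* allowed */

            /* open(), socket(), fork() -- even glibc's exit(), which
               uses exit_group -- would be fatal now.  Leave through
               the raw, whitelisted exit syscall instead. */
            syscall(SYS_exit, 0);
        }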

  • Joseph Lust

    If you use the proper frameworks, you do get most of these protections. I’m a GWT fan, and XSS, malicious code/SQL injection, CSRF, and many more items are all checked for if I just use the framework.

    In other frameworks, like Ruby et al., there are many modules for carrying out the proper checks too.

    But… unless you want an uber-restrictive environment, and want to pay Lockheed $30K for a laptop, there will always be vulnerabilities in the large stack of software (OSS or not) that makes up your applications.

    Someone will find an issue in your SSH, SSL, or other dependency and you’ll be vulnerable. Or… hell, even a vulnerability in the ASIC running your code will be found. If you really need to be secure, as is the policy of the CIA, don’t use a computer.

    • Claudio

      It’s ridiculous to think that an operating system or language can protect your application from unauthorized use.

      There are concepts that are unique to the application itself, and will have to be secured by design, rather than magic pixie dust.

      An operating system knows nothing of HTTP authentication. How can it protect you from vulnerable authentication schemes? How can it prevent request replay? Request forgery? That’s all specific to the HTTP protocol.

      Same with buffer overflows. Like it or not, the operating system knows nothing of buffers. It knows about page mapping, and it protects you pretty well from unauthorized access to another process’ memory. Except in the ways allowed by design. Allowed because they’re necessary.

      The state of insecurity is just a result of people’s inability to accurately gauge the complete repercussions of design decisions at any layer – the ripple effect in a complex system is huge, and outside the reach of a single person’s comprehension.

      Some kinds of vulnerabilities can be prevented at the OS level, some others at the language level, and some others at the framework level. Some only at the application level, and some can’t be prevented at any level, as is the case with 0-day exploits.

      No one can protect against an unknown attack vector unless the behavior of the whole system is perfectly understood – but that’s impossible: impossible for humans because of the immense complexity, and impossible for software because of the halting problem.

  • steve

    Thanks for an insightful article. However, I think you have missed two factors which are the real roots of the problem.

    First, as long as software is copyrightable and subject to lock-in tactics, vendors will value other things over security if it will make an extra buck, and will systematically obstruct the big convergence.

    Second, code actually is getting better, but there will always be the people factor. You have to have the freedom to write bad code in order to be able to write good code. And most of the Windows pandemic is due to user error.

  • John D.

    James, Microsoft owns the OS space in many business applications and doesn’t seriously compete with anyone. But even in this monopolistic environment, little attention has historically been paid to security because added features have always been more important (or viewed as more important) to customers. It’s sad, but it’s often better to be viewed as feature-rich than more secure. And from a development standpoint, it’s hard to devote a lot of time and money to security because it often doesn’t add anything to the bottom line. I constantly have to defend any expenditure on firewalls, IDS, etc., because they are viewed as expenses and don’t generate revenue. Secure development is seen in the same way.

  • http://bloggeek.me Tsahi Levent-Levi

    James,

    Such an approach would only get you some of the way there. There are a lot of aspects to security that need to be taken care of. Some of these can be delegated to the operating system and the language, but I fear that a lot is still up to application developers to deal with.

    The simplest examples here are apps that hog your data for future use, or for advertising… or apps that just don’t implement basic login mechanisms properly. I’ve written about it from a different angle here, stating that users don’t need to care about security but developers must: http://bloggeek.me/users-security/

  • David Collier-Brown

    “If we prevent you from doing something stupid, we might also prevent you from doing something brilliant.” This was in the v6 era, so they were thinking of such things as the “provably secure operating system” project, PSOS, as well as the proposed B3 and A1 Orange Book systems.

    These days, there are some areas where we can improve the default security, but still plenty of debate about where the boundaries lie.

    –dave

  • Cellar

    Some interesting stuff here, but not all of it holds water. Take, for example, that most widely used desktop environment emulator. It’s so widely used, it’s practically a monoculture. Yet it achieved that position despite lacking all sorts of safeguards. In fact its desktop IPC model is outright unsecurable.

    So if that is to be, as it practically has been for quite a while, our single big base, we’d be shit pot outta luck. Oh hey, we are.

    Languages? Oh, I recall a certain also-ran run-time environment that has ways to subvert it built right in. You’re not supposed to go there, but you can. Despite its sales pitch that “managed code” and whatnot are so much better for “enterprise” environments that are apparently filled with fungibly mediocre “pro-grammers”. If you insist you can still break it, no sweat.

    That other big run-time environment is supposed to be rock solid, but of course it isn’t. Humans made it, after all. There, though, the damage is largely confined to the platform-dependent specifics of a certain implementation, which limits the fallout.

    So there, too, the argument isn’t that less diversity will make us safer. Want an analogy from biology? Monoculture, with its inherent problem that if there’s a problem absolutely everyone will have it, is why opposites attract.

    While I can see where you’d rather be ze big arteeste, programming just isn’t like that. Pouring thought into repeatable action requires discipline and if we fsck up, well, we’ll just have to fix it, won’t we? Cryptography is the essence of this. The algorithms are interesting, but it’s the many potholes on every level that make it so hard to do right.

    Of course, our general approach to security isn’t very solid either. It’s about filling the holes in a notional swiss cheese with patchy easy cheez; there’s an entire cottage industry to doing exactly that, handily not fixing the underlying problem, ensuring them bread on the table for years to come.

    As an opinion piece, you need to think this one over a bit more. All you’re doing now is throwing a temper tantrum and wishing the world’s problems would go away. Very artist-y, I’m sure, but not very constructive. Making things go is about engineering, not about pleasing the art critics.

    Even that holiday reference is quintessentially provincial, in the sense that you handwave over the fact that it’s a national day remembering your own dead, and as such not applicable to any of the other 200-odd countries in the world. It’s quite embarrassing.

    So much that it earns you this: Get yer head outta yer arse, you redneck yankee, you.

  • Marc Espie

    Oh yeah. “it’s not our fault, really”.

    The simple fact that you have security ninjas is itself a sign of a flawed process! Keep track of exploits and remediations, really? Why don’t your core developers know how to write correct (hence secure) code in the first place?

    Maybe because you don’t pay them enough / don’t leave them enough time to write things right the first time…

    Oh, and hey, I work for OpenBSD. Yes, there are blatant issues with OpenBSD, the most prominent being X Windows and its drivers. It’s because no one gives a fuck: ATI, Nvidia, Intel gfx cards, closed drivers, more performance, no stability, just because the average user doesn’t care.

    You want to change that? Donate to our project, so that we get more TIME and HARDWARE to work with.

    But really, under-protected? Start by worrying that YOUR SOFTWARE deals with errors properly instead of randomly thrashing around, and then don’t come blaming the OS when things are YOUR FAULT in the first place. You don’t like insecure pieces of shit like PHP or Windows? STOP CODING FOR THEM!

  • http://www.boogdesign.com/b2evo/ Rob Crowther

    “Instead of trying to limit the side effects of programs by locking things down as issues are found, why not lock EVERYTHING down, and default it to no access?”

    Because then you end up with an unusable system which pops up 50 request dialogues before letting the user do anything useful. You’ve succeeded in shifting the onus of managing security from the programmer, who at least should have enough technical knowledge to understand the issues, to the user, who we all know, on average, won’t. What most users will end up doing is allowing everything indiscriminately just so they don’t get annoying dialogues any more.

  • http://therandshow.blogspot.com John Thomas

    Without necessarily buying into it, I find Rob Crowther’s position interesting when you look at the fact that in the embedded world, where user contact is mostly-to-completely indirect, secure OSes (or no OS at all) are real concerns and goals. There you see security-designed OSes, like VxWorks and Green Hills Integrity, have a big edge.

    Perhaps it comes down to flexibility: if the OS developers know how their users will behave, they can optimize security to that end; however, for an OS to be truly general-purpose, it usually needs to open up some security holes. Maybe we would benefit, though, from some specialty OSes oriented toward security in their particular industries and niches.

  • http://www.notebooknerds.com Darren

    Brilliant article, and I completely agree with you, James; any developer that leaves themselves open to that kind of attack deserves the criticism and every negative comment or remark made about them, it’s as simple as that.

    These people are paid a lot of money to ‘secure’ our data, and most importantly our country’s data. It’s simple basics: from a development standpoint, you must make sure you invest in looking for holes and flaws, rather than paying for your own and the company’s trips to WWDC to drink coffee and swap numbers with people to play golf with… All I am saying is, if the *cough* Chinese were smart enough to find it, why didn’t we in the first place? Makes me sad…

  • https://play.google.com/store/apps/details?id=orbitlivewallpaper.puzzleduck.com PuZZleDucK

    “Either way, there’s a lot of hardware out there walking around with its fly down, so to speak” … right here at the start you point out that the hardware is flawed, then you go on to rant about software?!?! Security in an insecure environment is what this blog should have been about.

    Also, I concur with Marc: if you don’t like a system’s security model, “STOP CODING FOR THEM”. OpenBSD could use the developers ;)

  • Wiki Fast

    If you want a secure OS, I suggest patching the Linux kernel with the one from grsecurity.net. They tell you all the commands you need on an Ubuntu-based system to download, configure, compile, and install it. I got it up and running, and I’m not much of a programmer, so you could too.

    During configuration, note all the settings that come built in with the High security level for both grsec and PaX, then add them back under the Custom setting, disabling position-independent executables so the Xorg graphics server can run; install paxctl from there as well, in order to issue the application-specific commands that permit a web browser. All the instructions are in the Documentation link in the sidebar of the site. Compiling the kernel takes hours.

    If you want to check the GPG signature of the Linux kernel, note that the key has been updated since the documentation was last revised, as it warns might happen. The Documentation’s link to the kernel’s signature page led me to the command gpg --recv-keys 6092693E, and I found that that example’s key ID was in fact the same one I received by downloading the certificate the hard way. You will also need to look up a couple of command equivalents (e.g., for unpacking) among the .tar, .tar.bz2, and .tar.xz file formats, because the kernel’s and grsecurity’s instructions differ.

    For security at the language level, I suggest the Wikipedia article “Comparison of Programming Languages”: choose one that is marked as a “systems” language in the Intended Use column of the General Comparison table, so that it can be used to program an OS. Then narrow the list you’ve made with the Strong (Type Strength) and Safe (Type Safety) markings in the Type Systems section, and finally narrow it once more to those marked Yes in the Failsafe I/O and System Calls section. You’ll end up with a list of Gosu (an easy transition for Java developers), Rust (Mozilla’s project), and Ada. I brought this list up to one of the developers of an OS written in Java, since Java is not a systems language, and he said Ada should not be considered for the project he is part of because it is not easy to develop with. There are also two languages that pass every step except the last, where the Wikipedia entry lacks information on the Failsafe aspect: Oberon and Plus. Oberon already has an OS, possibly legacy, called Bluebottle (a.k.a. A2), and Plus had the Michigan Terminal System as an OS as well.

  • http://radar.oreilly.com James Turner

    Jim, if you look at actual industry figures, 3 or so operating systems pretty much own the market, and the same holds true for languages. They spend most of their time competing with each other on feature sets, and tend to neglect making themselves secure.

    I don’t believe that you can create a totally invulnerable operating system, because there will always be idiots out there who misconfigure them, but if we (as an example) all agreed to pick a POSIX-variant OS and hammer all the security holes we can find into oblivion, then build the GUIs of our choice on top of that, we’d go a long way toward preventing exploits.

    James