ENTRIES TAGGED "security"
When will Adobe disclose the full extent of its breach to users?
Over the last week, the analysis of the Adobe breach has gotten more interesting.
The file itself has been available via BitTorrent. I found a torrent and looked through it myself. If you’re interested, note that the torrent gets you a 4+GB zip that expands to roughly 10GB of text.
Paul Ducklin at Sophos has published a very good analysis of the contents of that file. The summary is that each record has an account number, an account name, an email address, the encrypted password, and the person’s password hint.
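To make that record layout concrete, here is a minimal parsing sketch. The `-|-` field separator, the trailing `|--` terminator, and the sample line are assumptions drawn from published excerpts of the dump, not a documented format.

```python
# A minimal sketch of pulling apart one record of the dump. The "-|-"
# separator, trailing "|--" terminator, and field order are assumptions
# based on published excerpts, not a documented format.
FIELD_SEPARATOR = "-|-"

def parse_record(line: str) -> dict:
    parts = line.rstrip("|-\n").split(FIELD_SEPARATOR)
    if len(parts) < 5:
        raise ValueError(f"unexpected record shape: {line!r}")
    account_id, account_name, email, encrypted_password, hint = parts[:5]
    return {
        "account_id": account_id,
        "account_name": account_name,
        "email": email,
        # Per Ducklin's analysis this field is an encrypted blob, not a hash.
        "encrypted_password": encrypted_password,
        "hint": hint,
    }

# Hypothetical sample in the assumed layout:
sample = "110958462-|--|-someone@example.com-|-j9p+HwtWWT86aMjgZFLzYg==-|-daughter|--"
print(parse_record(sample))
```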
The need to root out old data goes well beyond creating disk space
A couple of weeks ago Brian Krebs announced that Adobe had suffered a serious breach of customer data as well as source code for a number of its software products. Nicole Perlroth of The New York Times updated that to say the breach appears to be much bigger than first thought, and Krebs agrees. Adobe itself announced it first, ahead of Krebs’s initial report, in CSO Brad Arkin’s terse blog post, Illegal Access to Adobe Source Code.
By now, breaches are hardly news at all. All of us pros flat out say that it isn’t a matter of *if* you get hacked, but *when*. Adobe’s is notable solely because of the way the news has dribbled out. First came the “illegal access” to source code, then news of lost customer data to the tune of 2.9 million accounts, then 38 million, and now (maybe?) 150 million. The larger number is supposedly expired accounts, or something.
Not just paying attention, but starting over
Security has to reboot. What has passed for strong security until now is going to be considered only casual security going forward. As I put it last week, the damage that has become visible over the past few months means that “we need to start planning for a computing world with minimal trust.”
So what are our options? I’m not sure this ordering runs precisely from worst to best, but it seems sensible today.
Stay the Course
This situation may not be that bad, right?
There's no such thing as perfect security, and we should stop sacrificing the good for the perfect.
I’ve had a day or two to play with my new iPhone 5s, and the fingerprint scanner is one of the nicer things about it. I like the added security of being able to unlock it with my fingerprint, because I was one of those people who could never be bothered to have a passcode on it before.
Of course, the news of the day is that some inventive folks in Germany have managed to unlock one of the phones by lifting a print from the glass of the display and using a variety of fairly low-tech steps to create a false thumbprint from it. This should come as no surprise to anyone who understands how fingerprint sensors work. The 5s does more than some to prevent spoofing, but pretty much no fingerprint scanner is impervious to a determined attack.
What is sad to see is the conclusion that these hackers (and the press) have drawn. “Fingerprints aren’t a good method of securing data. You should never use something that you can’t change as a password. Always practice two-factor security policies.” Of course, if someone really wants to break into your phone, and is willing to expend the effort to do it, they can. If someone wants to break into a typical house, they can. If someone wants to steal your car, they can.
Is protecting open processes possible?
I was somewhat surprised, despite my paranoia, by the extent of NSA data collection. I was very surprised, though, to find the New York Times reporting that the NSA seems to have eased its data collection challenge by weakening security standards generally:
Simultaneously, the N.S.A. has been deliberately weakening the international encryption standards adopted by developers. One goal in the agency’s 2013 budget request was to “influence policies, standards and specifications for commercial public key technologies,” the most common encryption method.
Cryptographers have long suspected that the agency planted vulnerabilities in a standard adopted in 2006 by the National Institute of Standards and Technology and later by the International Organization for Standardization, which has 163 countries as members.
Classified N.S.A. memos appear to confirm that the fatal weakness, discovered by two Microsoft cryptographers in 2007, was engineered by the agency. The N.S.A. wrote the standard and aggressively pushed it on the international group, privately calling the effort “a challenge in finesse.”
The Guardian tells a similar story. It’s not just commercial software, where the path seemed direct, but open standards and software, where it seems it should have been harder.
I was very happy to wake up to a piece from the IETF emphasizing their commitment to strengthening security. There’s one problem, though, in its claim that:
IETF participants want to build secure and deployable systems for all Internet users
Last week’s revelations make it sadly clear that not all IETF participants are excited about creating genuinely secure systems.
For the most part, people use the Tesla REST API via the iPhone and Android mobile apps. The apps enable you to do any of the following:
- Check on the state of battery charge
- Muck with the climate control
- Muck with the panoramic sunroof
- Identify where the hell your car is and what it’s doing
- Honk the horn
- Open the charge port
- Change a variety of car configuration settings
- More stuff of a similar nature
For the purposes of this article, it’s important to note that there’s nothing in the API that (can? should?) result in an accident if someone malicious were to gain access. That said, there is enough here to do some economic damage, both by running up electricity usage and by forcing excess wear on the batteries.
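To make the shape of that API concrete, here is a minimal Python sketch of the kind of client the mobile apps embody. The host name, paths, and JSON fields below are placeholders rather than the actual (undocumented) Tesla endpoints.

```python
# Illustrative only: a REST client shaped like the one described above.
# The host, paths, and response fields are placeholders, not Tesla's
# real endpoints.
import requests

BASE = "https://owner-api.example.invalid"  # placeholder host
session = requests.Session()

def login(email: str, password: str) -> None:
    """Authenticate; the session keeps whatever cookie/token comes back."""
    resp = session.post(f"{BASE}/login", data={"email": email, "password": password})
    resp.raise_for_status()

def charge_state(vehicle_id: str) -> dict:
    """Read-only call: battery level, charging status, and so on."""
    resp = session.get(f"{BASE}/vehicles/{vehicle_id}/charge_state")
    resp.raise_for_status()
    return resp.json()

def honk_horn(vehicle_id: str) -> bool:
    """A state-changing 'command' call; harmless, but it shows the surface area."""
    resp = session.post(f"{BASE}/vehicles/{vehicle_id}/command/honk_horn")
    resp.raise_for_status()
    return resp.json().get("result", False)
```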
The NSA Can't Replace 90% of Its System Administrators
In the aftermath of Edward Snowden’s revelations about the NSA’s domestic surveillance activities, the NSA has announced that it plans to get rid of 90% of its system administrators via software automation in order to “improve security.” So far, I’ve mostly seen this piece of news reported and commented on straightforwardly. But it simply doesn’t add up. Either the NSA has a monumental (yet not necessarily surprising) level of bureaucratic bloat, such that it could feasibly cut that amount of staff regardless of automation, or it is simply going to be less effective once the staff is gone. I talked with a few people who are intimately familiar with the kind of software typically used to automate traditional sysadmin tasks (Puppet and Chef). Their products are generally used to let an existing group of operations people do much more, not to do the same amount of work with significantly fewer people. The magical thinking that the NSA can put in place enough automation to do away with 90% of its system administration staff betrays some fundamental misunderstandings about automation. I’ll tackle the two biggest ones here.
1. Automation replaces people. Automation is about gaining leverage: it streamlines the human tasks that computers can handle in order to free up brainpower for harder problems. As James Turnbull, former VP of Business Development for PuppetLabs, said to me, “You still need smart people to think about and solve hard problems.” (Whether you agree with the types of problems the NSA is trying to solve is a completely different matter, of course.) In reality, the NSA should have been working on automation regardless of the Snowden affair. It has a massive, complex infrastructure. Deploying a new data center, for example, is a huge undertaking; it’s not something you can simply automate.
Or as Seth Vargo, who works for OpsCode (the creators of the configuration management automation software Chef), puts it, “There’s still decisions to be made. And the machines are going to fail.” Sascha Bates (also with OpsCode) chimed in to point out that “this presumes that system administrators only manage servers.” It’s a naive view. Are the DBAs going away, too? Network administrators? As I mentioned earlier, the NSA has a massive, complicated infrastructure that will always require people to manage it. That, plus all the stuff that isn’t (theoretically) being automated, will now fall on the remaining 10% who don’t get laid off. And that remaining 10% will still have access to the same information.
2. Automation increases security. Automation increases consistency, which can have a relationship with security. Before something is automated, you might have a wide variety of people doing the same task in varying ways, hence with varying outcomes. From a security standpoint, automation secures the infrastructure and makes it auditable. But it doesn’t really increase data/information security (e.g., whether this file can or cannot live on that server); those are human tasks requiring human judgment. And that’s exactly the kind of information Snowden got his hands on. This is another example of a government agency overreacting to a low-probability event after the fact. Getting rid of 90% of its sysadmins is the IT equivalent of still requiring airline passengers to take off their shoes and cram their tiny shampoo bottles into plastic baggies; it’s security theater.
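For what the consistency argument looks like in practice, here is a toy sketch in the spirit of what Puppet or Chef does, written as plain Python rather than either tool’s DSL: it converges a file toward a declared state, identically on every run, but it cannot decide whether that file belongs on that server in the first place.

```python
# Toy illustration of "automation increases consistency": idempotently
# enforce a declared state. Plain Python standing in for a Puppet/Chef
# resource; the paths and values are illustrative.
import os, pwd, grp, stat

def ensure_file_state(path: str, mode: int, owner: str, group: str) -> None:
    """Converge ownership and permissions toward the declared state."""
    st = os.stat(path)
    if stat.S_IMODE(st.st_mode) != mode:
        os.chmod(path, mode)
    uid = pwd.getpwnam(owner).pw_uid
    gid = grp.getgrnam(group).gr_gid
    if (st.st_uid, st.st_gid) != (uid, gid):
        os.chown(path, uid, gid)

# Whether this file should exist on this server at all is still a human,
# policy decision -- the judgment automation does not supply.
ensure_file_state("/etc/ssh/sshd_config", 0o600, "root", "root")
```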
There are a few upsides, depending on your perspective on this whole situation. First, if your company is in the market for system administrators, you might want to point your recruiters at D.C. in the near future. Second, odds are the NSA is going to be less effective than it is right now. Perhaps, like the CIA, it is also courting Amazon Web Services (AWS) to help run a private cloud, but again, as Sascha said, managing servers is only a small piece of the system administrator picture.
If you care about or are interested in automation, operations, and security, please join us at Velocity New York on October 14-16. Dr. Nancy Leveson will be delivering a fantastic keynote on security and complex systems.
The App Store model has increased the uncertainty of the software release process
The recent unavailability of the Apple Developer Portal underscores how dependent developers have become on third parties during the software lifecycle. For those who are not following the fun and games, the developer.apple.com sites, which include much of the functionality needed to develop Mac and iOS applications, have been unavailable for more than a week as of this writing. Although iTunes Connect, the portal used to actually deploy apps to the App Stores, has remained available, the rest of the site has been off-limits. This is all thanks to a security intrusion (evidently by an over-zealous researcher).
The App Store model has fundamentally changed how software is distributed, mostly for the better (IMHO), but it has also taken some control of the release process out of the hands of developers and the companies they work for. As I have spelled out previously in my book on iOS enterprise development, the fact that Apple has the final say on whether and when software goes into the store has forced more conservative release timelines. If you want to release on the first of September, you need to count back at least two weeks for “gold master,” because you need to upload the app, potentially go through a round of rejection from Apple, and then upload a fixed version.
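As a back-of-the-envelope illustration of that scheduling math, here is a small sketch; only the two-week total comes from the rule of thumb above, and the split between review time and a rejection buffer is an illustrative assumption.

```python
# Work backward from a ship date to a "gold master" deadline, leaving room
# for the upload, App Store review, and one rejection/resubmission cycle.
# The 7 + 7 day split is an assumption; only the two-week total is the
# rule of thumb from the text.
from datetime import date, timedelta

def gold_master_deadline(ship_date: date,
                         review_days: int = 7,
                         rejection_buffer_days: int = 7) -> date:
    return ship_date - timedelta(days=review_days + rejection_buffer_days)

print(gold_master_deadline(date(2013, 9, 1)))  # -> 2013-08-18
```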
Android apps don’t suffer from this lag, because most of the Android stores don’t do any significant checking of the applications uploaded to them. The Devil’s Deal that Apple developers have made with Apple is that in return for the longer wait to get apps into the store (and having to follow Apple’s rules), they get a de facto seal of approval from Apple. In other words, it is assumed that apps in the iTunes store are more stringently policed and less likely to crash or do harm (deliberately or otherwise).
The current downtime has brought that deal into question, however. Suddenly, developers who need new provisioning certificates, passbook certificates, or push notification certificates find themselves with nowhere to go. Even if iTunes Connect is available, it doesn’t do you any good if you can’t get a distribution certificate to sign your app for the store. I’m sure there are developers at this moment who have had their finely tuned release strategies thrown into disarray by the unavailability of the developer portal.
Being essentially at the mercy of Apple’s whims (or Google’s, for that matter) can’t be a pleasant sensation for a company or individual trying to get a new piece of software out the door. The question the developer community will have to answer is whether, in the long run, the benefits of the App Store model are worth the hassles.
OSCON 2013 Speaker Series
Scoping Code to the Data
Every website has its own navigation structure, layout, and audience, but when you strip away these unique attributes, you are left with data (chats, emails, photos) that can be treated uniformly across all websites. Operations on these data, like encryption and signing, can be performed with indifference to their context and their contents.
Privly uses this data indifference to create the notion of “Injectable Applications”: full web applications that are injected into the context of other web applications. Since these applications are scoped to data and not layout, their properties are simplified and usable across the web.
In short, if you scope an application to the data, then the cryptography can be viewed in potentially untrusted contexts.
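A minimal sketch of what “scoped to the data” means in code: encrypt and authenticate the content itself, with no reference to the page it will eventually appear in. This uses the Python `cryptography` package’s Fernet recipe purely as a stand-in; Privly’s actual formats and key handling are not shown here.

```python
# Encrypt/authenticate a piece of content (a chat line, an email body)
# with no knowledge of the site it will be injected into. Fernet is a
# stand-in here, not Privly's actual scheme.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice, keys come from real key management
box = Fernet(key)

def seal(plaintext: str) -> bytes:
    """Produce a ciphertext that any (even untrusted) page can carry around."""
    return box.encrypt(plaintext.encode("utf-8"))

def open_sealed(token: bytes) -> str:
    """Decrypt only if the token is authentic; the host page never matters."""
    return box.decrypt(token).decode("utf-8")

token = seal("meet at 6pm")
assert open_sealed(token) == "meet at 6pm"
```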
Pre-Distributing Client Code
Privly creates an ecosystem of apps with known properties, which allows us to reason about security uniformly across the web. However, security is only as strong as its weakest point, which is why great care must be taken to distribute these applications appropriately. Packaging a set of applications into browser extensions and mobile apps means the code is not re-loaded from a remote source every time the browser loads a new page.
Requiring users to install an extension before they can view content is likely an impediment for any security system looking to gain users. However, since Privly uses hyperlinks to reference the content, there is an opportunity for a hosted fallback application. Depending on the nature of the injectable application, clicking the hyperlink could either present the same application that the extension would normally deliver, or present a prompt to install the appropriate browser extension.
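Here is a small sketch of that hyperlink-plus-fallback idea: the link itself points at a hosted application, with a reference to the stored content in its query string, so an installed extension can recognize the link and inject over it while everyone else simply clicks through. The domain and parameter names are hypothetical, not Privly’s real URLs.

```python
# Build and parse content links that work both with and without the extension.
# The fallback host and "dataURL" parameter name are hypothetical.
from urllib.parse import urlencode, urlparse, parse_qs

FALLBACK_APP = "https://privly-fallback.example.invalid/apps/Message/show"

def make_link(content_url: str) -> str:
    """A plain click on this link resolves to the hosted fallback application."""
    return f"{FALLBACK_APP}?{urlencode({'dataURL': content_url})}"

def extract_content_url(link: str):
    """What an installed extension would do before injecting the app in place."""
    params = parse_qs(urlparse(link).query)
    values = params.get("dataURL")
    return values[0] if values else None

link = make_link("https://content-host.example.invalid/posts/42.json")
print(link)
print(extract_content_url(link))
```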