Adobe Systems Inc. and other software makers have made sandboxing technology an important part of the application security strategy, isolating certain processes from interacting with the host machine's system memory. The goal is to stop attackers from reaching critical system files, preventing them from stealing sensitive data. But recently a security researcher pointed out an inherent flaw in the way the technology is being deployed, allowing savvy hackers to bypass sandboxing in Adobe Flash files stored on a user's computer. Sandboxing technology was developed in the 1990s and only now has reached mainstream adoption, said network security expert Anup Ghosh, founder and chief scientist of Fairfax, Va.-based Invincea Inc. In addition to Adobe, sandboxing technology is used by smartphone platform makers to isolate applications from accessing different functions of the device, and by some browser makers to isolate the browser's rendering engine. In this interview, Ghosh describes the basics of sandboxing, explains why it is a step in the right direction and points out some of the weaknesses in current implementations of the technology.
What is sandboxing, and how long have software vendors been using it in one form or another for security?
Back in the late 1990s the Defense Advanced Research Projects Agency (DARPA) funded some work
that developed sandboxes. A sandbox is intended to stop untrusted code from behaving badly. An
important attribute of a sandbox compared to straight application code is that it can allow
imperfect code to run and be exploited, but not cause damage on the host system. That is the role
of the sandbox. That's a very important concept because, for software that doesn't run in a
sandbox, a single flaw can result in a full compromise of the desktop. Now what we see in the
market are sandboxes in mainstream commercial products. Google Chrome deploys a sandbox for its
rendering engine. Adobe Reader X deployed a sandbox, so when you open a PDF file, the rendering
engine that Adobe Reader X uses runs in the sandbox.
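The confinement Ghosh describes can be pictured as a broker that sits between the untrusted renderer and the host. The sketch below is purely illustrative (the directory name and function are hypothetical, not Adobe's or Google's actual design): the renderer never opens files itself; it asks a broker, and the broker grants only what the policy allows.

```python
# Hypothetical sketch of sandbox-style mediation: a broker checks every
# resource request from the untrusted renderer against a fixed policy,
# so an exploited renderer still cannot reach arbitrary host files.

ALLOWED_DIRS = {"/tmp/render-cache"}  # the only location the policy permits

def broker_open(path: str, mode: str) -> bool:
    """Grant read-only access, and only inside an allowed directory."""
    return mode == "r" and any(path.startswith(d) for d in ALLOWED_DIRS)

# A legitimate renderer request succeeds; an exploit's request for a
# sensitive system file is denied by the broker, not by the renderer.
assert broker_open("/tmp/render-cache/page1.png", "r") is True
assert broker_open("/etc/passwd", "r") is False
assert broker_open("/tmp/render-cache/page1.png", "w") is False
```

The key property, as Ghosh notes, is that the renderer's own code can be imperfect and even exploited; the damage is bounded by the broker's policy rather than by the quality of the renderer.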
So software makers are trying to create an application that runs independent of the operating
system and the server, right?
That's how some sandboxes work and really that's what virtualization does, not so much what
sandboxing does. The Java Virtual Machine (JVM) is an example of that. For example, with
sandboxing, Google recognizes that some of the content you are going to get on a website
(JavaScript, for example) is going to be malicious. They know there are going to be flaws in the JavaScript engine
that they are not going to be able to account for or know ahead of time. The idea behind the
sandbox is that even if that flaw is there in the JavaScript engine and even if there's exploit
code that exploits that vulnerability, the fact that the JavaScript engine is running within that
Google Chrome sandbox should stop that exploit from succeeding. The idea is to contain that
malicious behavior inside of that sandbox. It's a step in the right direction to enable coders to
not have to write perfect code, which we know they can't. But application-level sandboxes don't go
far enough. The basic design of a sandbox involves trying to mediate any system call they can think
of that can be potentially exploited. As the Adobe Flash plugin exploit shows, they tried to
blacklist all the different communication protocols that could be called from
Adobe Flash, and they forgot at least one, and who knows how many more. So fundamentally, the
approach of trying to think of everything that could be exploited and then trying to mediate those
system calls is not a robust enough approach. It's not going far enough to isolate untrusted code
that a user might run from their desktop.
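Ghosh's objection can be made concrete with a toy comparison (the call names below are invented for illustration): a deny-list mediator must enumerate every dangerous operation in advance, so anything the developers forget is allowed by default, whereas the inverse allow-list policy blocks anything not explicitly approved.

```python
# Illustrative sketch of the design weakness Ghosh describes. Call names
# ("dial_out", etc.) are hypothetical, not real sandbox primitives.

DENY_LIST = {"connect", "send", "exec"}   # calls the vendor thought to block

def denylist_mediator(call: str) -> bool:
    return call not in DENY_LIST          # anything unlisted slips through

ALLOW_LIST = {"read_cache", "draw"}       # the inverse, default-deny policy

def allowlist_mediator(call: str) -> bool:
    return call in ALLOW_LIST             # anything unlisted is blocked

# A primitive the developers forgot escapes the deny list but not the allow list:
assert denylist_mediator("dial_out") is True
assert allowlist_mediator("dial_out") is False
assert denylist_mediator("connect") is False
```

This is the sense in which "trying to think of everything that could be exploited" is fragile: the deny-list's security depends on a complete enumeration that, as the Flash case showed, is rarely achieved.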
It's a step in the right direction to enable coders to not have to write perfect code ... but application-level sandboxes don't go far enough.
Anup Ghosh, founder and chief scientist, Invincea Inc.
When Google Chrome came out, it had the sandboxing capability, right? Were they sandboxing a
number of third-party components?
Actually, they were sandboxing their own renderer. They had not, at that point, supported
sandboxing of third-party components. What's happened since then is Adobe has put in their sandbox
for the Flash plugin for Google Chrome.
We don't see sandboxing with a lot of applications. Is it really difficult for the coders to
create the sandboxing capability?
It does require a redesign and a rewrite of the application itself. If you look at Adobe Reader
X, it is a completely new code base from Adobe Reader 9.x. That's part of the reason why it's not
that easy. It's because of the approach they've taken, which requires them to think about all the
different ways an attacker can run code that's going to try to exploit something on that system.
When they develop a model that says this code is likely to call the file system, and we're going to
allow these reads and not those writes, they have to run code around each one of those writes to
block them. In a glaring omission in the Reader X sandbox, they decided not to try to stop code
from reading files, which potentially allows a compromised machine to send them to some remote
server.
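The omission Ghosh describes can be modeled in a few lines (a simplified sketch of the policy shape, not Adobe's actual code): if the policy mediates only writes, then the read-plus-send sequence an exfiltration attack needs is never interrupted.

```python
# Simplified model of the Reader X omission described above: writes are
# denied, but reads and outbound sends were left unmediated.

def filesystem_policy(op: str) -> bool:
    """Only write operations are blocked by this (hypothetical) policy."""
    return op != "write"

def exploit_can_exfiltrate() -> bool:
    # Data theft needs only a read plus an outbound send; neither is blocked.
    return filesystem_policy("read") and filesystem_policy("send")

assert filesystem_policy("write") is False   # tampering is stopped...
assert exploit_can_exfiltrate() is True      # ...but exfiltration is not
```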
We hear sandboxing from time to time when it comes to Android, Apple and even Windows
smartphones. Is that the same basic concept?
Yes. Android employs its own virtual machine, Dalvik, which is similar in concept to the Java
Virtual Machine (JVM), for running apps. What you are getting is the virtual machine as your
sandbox when you run an Android app, which is good because that model has
been around for a long time. The problem is that a lot of apps require permissions that go outside
of that sandbox. When you download an app as a user, you're asked to give permissions to that app
that essentially break the sandbox. Of course, as a user, you're pretty much always going to answer
"yes," because you want to get the full functionality of that app. You'll give it permission to the
GPS or the camera, the microphone or the phone. All these things essentially break the sandbox. So
that model of asking users to grant permissions doesn't protect the user because most of the time
users aren't equipped to make good security decisions.
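Ghosh's point about permissions can be sketched as a simple capability model (the permission names mirror Android's, but the mechanics here are deliberately simplified): the app starts with no device access, and every "yes" at install time adds a capability the sandbox can no longer confine.

```python
# Toy model of the point above: each permission a user grants at install
# time widens what the app can reach outside its sandbox.

def grant(sandbox: set, permissions: list) -> set:
    """Each approved permission adds a capability beyond the sandbox."""
    return sandbox | set(permissions)

base = set()                                   # app starts fully confined
widened = grant(base, ["ACCESS_FINE_LOCATION", "CAMERA", "RECORD_AUDIO"])

assert "CAMERA" in widened        # the sandbox no longer confines the camera
assert grant(set(), []) == set()  # a user who declines grants nothing extra
```

The security decision is thus pushed onto the user, and as Ghosh notes, most users will approve whatever the app asks for.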
Adobe is using protocol handler blacklists, which security researcher Billy Rios has pointed
out is a weakness. What is a protocol handler blacklist? Is there an alternative way to block
certain protocols?
They put a requirement on any Flash files that the user loads from disk. The requirement is that
when the Flash file runs, it should not be able to make any outbound communications. The risk they
are worried about is that the Flash file is harboring malicious software and it might be able to
read sensitive documents unbeknownst to the user and then send them out over the network. That was
the security requirement. They implemented a sandbox to prevent that exfiltration of data from
happening. The way they did it was by picking out the various ways that you can send data out. They
enumerated different network protocols. We don't know what they all were, but we do know they
didn't enumerate all of them. Billy Rios knew that wasn't a comprehensive approach, so all he
needed to do was find one protocol that was not on their blacklist; a single unlisted protocol was
enough to leak any data he found on the desktop.
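The bypass Ghosh describes reduces to a one-scheme gap in an enumerated list. In the sketch below, the blocked schemes and the escaping one are examples only (the actual handler Rios used is not named in this interview): whatever scheme the vendors forgot to enumerate becomes the exfiltration channel.

```python
# Illustrative model of a protocol handler blacklist: outbound requests
# are blocked only if their URL scheme appears on the enumerated list.
# Scheme names here are examples, not the actual omitted handler.

BLOCKED_SCHEMES = {"http", "https", "ftp"}     # the enumerated blacklist

def outbound_allowed(url: str) -> bool:
    scheme = url.split(":", 1)[0].lower()
    return scheme not in BLOCKED_SCHEMES

# Enumerated protocols are stopped as intended:
assert outbound_allowed("http://attacker.example/steal") is False
# One handler the vendors did not enumerate is enough to leak data:
assert outbound_allowed("gopher://attacker.example/steal") is True
```

An allow-list of approved schemes would invert the failure mode: a forgotten scheme would then be blocked by default instead of let through.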