Adobe: Automatic updates and creating 'perfect' software

Brad Arkin discusses Adobe's addition of automatic update downloads for Reader and Acrobat, and why it took Adobe so long to offer automatic updates. Plus he tackles the feasibility of making "perfect" software.


Interviewer: Let us talk about some of the other things that we have seen. One
is, and you already mentioned it, the automated
downloader . . .

Brad Arkin: Yes.

Interviewer: . . . to push out patches. Can you talk a little bit about that?

Brad Arkin: Sure. When we looked at some of the security challenges around
Reader and Acrobat, and the fact that it is so much in the
spotlight right now, one of the things that jumped out at
us is how important it is that users stay up to date. The
vast majority of people who have ever experienced an
attack against their system via a malicious PDF were using
out-of-date software. Although we are doing lots of work
to make Reader harder to attack, keeping people up to date
could solve a huge part of the problem. We studied why
some people do not update, or why they do not stay up to
date, and there are lots of little things we identified
that we can improve on.

We completely rewrote the update mechanism for Reader and
Acrobat, and we shipped that, as a pilot, in the October
2009 release. When we shipped the security update in
January, and then again when we had an out-of-band update, I
think in February 2010, we learned from that and
continued to improve the updater. Then in April of 2011, we
turned it on for all users, so that was the first update
that everybody received through the new mechanism. The user
experience design folks would explain to you a million
things they changed that make you more likely to update:
when it notifies you of an update, the text that it
uses, the way that it notifies you, these have all changed.
The actual mechanics under the hood of how we have
implemented it are all brand new. For example, the way it
attempts to check: if your wireless network did not happen
to be active at that moment, it will wait and check again a
little later instead of waiting a long time.
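The check-and-retry behavior described here could be sketched roughly as follows. This is a minimal illustration, not Adobe's actual implementation: the probe host, port, and delay values are all assumptions, and the connectivity check is injected so the scheduling policy can be exercised without a real network.

```python
import socket


def network_available(host="example.com", port=443, timeout=2.0):
    """Cheap reachability probe; the host name is purely illustrative."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def schedule_update_check(do_check, is_online=network_available,
                          retry_delay=300, normal_interval=86400):
    """Run the update check if the network is up; otherwise return a
    short retry delay instead of waiting out the next full interval.
    Returns the number of seconds until the next attempt."""
    if not is_online():
        return retry_delay        # network down: try again in a few minutes
    do_check()                    # perform the actual update check
    return normal_interval        # routine cadence: check again tomorrow
```

Injecting `is_online` as a parameter is what makes the "wait a little, then check again" policy testable in isolation, without depending on the machine's real connectivity.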

The biggest change that end users will notice is that we
have created a third option. Before, you had fully
manual, and then what we call semi-automatic, where it would
download but still give you a 'yes/no, would you like to
install now?' prompt. Fully automatic, if enabled, will
silently reach out, and if an update is available, download
and install it, all without any user interaction. Then
it will provide a little notice in the icon tray, the
system tray at the bottom, that says, 'Hey, you have been
updated,' and the next time you restart, not the system,
but just Reader, that new update takes effect.

Right now, any user can go into Reader preferences and
switch to this fully automatic mode, though only on
Windows. We did rewrite the entire Mac updater as well, but
because you have to enter a password to install software on
a Mac, the user experience would not have worked in a
fully silent mode. We have a goal of moving the whole
user base towards the fully automatic mode at some point in
the future, but we do not want to do this without notifying
users and giving them the choice. We will always
support the fully manual and semi-automatic modes as well,
because while, on a percentage basis, we want most people to
use fully automatic, in absolute numbers there are lots of
people who have reasons why they do not want to be
updated without knowing about it, so we need to
support them as well.

Interviewer: Why did this kind of update feature not exist in earlier versions
of the software? Why did it take so long to get to this
point?

Brad Arkin: For us, the motivation when we did this work on the updater was
around helping people stay up to date as a security
precaution. Previously, the threat landscape had not changed
to the point where Reader was getting so much attention
from the bad guys, so it was not an issue; we did not
invest in that area because it just was not a big problem,
and we were focused on other areas. With the shift in the
current landscape, the big focus for us now is taking a big
step back and saying, 'What can we do to keep people safe?' The
updater was one thing that we could do. In the scheme of
some of the things we were working on, it was relatively
cheap, so it got out early. Something like sandboxing is
much more expensive, so it took a lot longer. Measured on
ROI, helping people stay up to date is really impactful,
so that is why we prioritized it.

Interviewer: How about internally? What do you do, in terms of testing Adobe
Reader and Acrobat, testing the code?

Brad Arkin: We have our Secure Product Lifecycle, and this SPLC is
analogous to Microsoft's SDL. I think it is about 85
different activities, milestones, or review points that
happen from design, modeling, and coding all the way
through. When you look at the security assurance part of
it, we do things that really start from the very beginning.
The first time someone says, 'We have a new idea for a
product,' there is a security review that happens.
Occasionally we will say, 'This is simply not possible to
do and keep it secure, so let us revisit,' just on day one.
For a product like Reader, Acrobat, or Flash Player, these
products that have a long history already, we are not
starting from a clean slate, so we do a health check and a
risk assessment, where we say, 'What kind of code are we
dealing with? When was it written? What does the quality
look like? What can we do in order to measure what this
looks like in the threat environment that we anticipate
it is going to be deployed into?' Right now,
attacks are getting a lot of attention at the application
layer, so we know we are going to see lots of attacks
against anything that is widely deployed, so it needs to be
really robust.

Then we say, 'OK. Given that, let us look at some other
things,' and that informs what other steps we follow
throughout the rest of that development cycle. We may say,
'We need a new updater, we need a JavaScript blacklist
framework,' or lay out some other features, and we also lay
out the general security quality steps that we are going to
take. During the early phases we do things like threat
modeling. We also do work during the actual coding phase
around static code analysis, where we use multiple
different static code analysis tools to look through the
code in an automated fashion and flag things that we want
to look at manually. We also do manual code reviews of
important components.

On the testing side, there are a bunch of different types of
security testing. There is spec-driven testing, where you
have a spec that says, 'Here is what it is supposed to do,'
and we build out a security test plan that is a chapter
within the broader, overall test plan. So we say, 'It is
supposed to have this functionality and nothing else,' and
we try edge cases and all sorts of creative ways to bring
it into a state it should not go into. We also do automated
testing, which a lot of people call fuzzing, where we just
throw a bunch of garbage at APIs and see if we can trigger
some type of fault; if so, then we go back and figure out
what we need to do to the code.
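As a minimal sketch of that kind of fuzzing, the loop below throws random byte strings at a toy parser and tallies the exception types it triggers. The `parse_pdf_header` function is a hypothetical stand-in for an API under test, not Adobe's code; real fuzzers also mutate valid inputs and watch for crashes, not just exceptions.

```python
import random


def parse_pdf_header(data: bytes) -> bool:
    """Hypothetical stand-in for an API under test."""
    if not data.startswith(b"%PDF-"):
        raise ValueError("missing PDF header")
    major, minor = data[5:8].decode("ascii").split(".")
    return int(major) >= 1 and int(minor) >= 0


def fuzz(iterations=1000, seed=42):
    """Feed random garbage to the parser and tally the exception types
    raised; unexpected types (or hangs and crashes in a real harness)
    are the leads that get triaged back into code fixes."""
    rng = random.Random(seed)
    faults = {}
    for _ in range(iterations):
        blob = bytes(rng.randrange(256) for _ in range(rng.randrange(1, 64)))
        try:
            parse_pdf_header(blob)
        except Exception as exc:
            name = type(exc).__name__
            faults[name] = faults.get(name, 0) + 1
    return faults
```

Seeding the generator keeps runs reproducible, so any fault the loop surfaces can be replayed while debugging.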

There are a lot of commercial scanners that are useful, if
you are building a server product, for reviewing potential
issues like SQL injection and cross-site scripting; those
are less useful for desktop products. We also do a lot of
work with third-party consultancies. Even though we have
dedicated, in-house testing staff who are fully trained on
the security side of testing, there are always new ideas
and new innovations being developed, so we have a long
vendor list that we work with, and we are always bringing
in different folks who have experience in different
environments, in order to test, to see if they can catch
stuff that we have missed so far. The goal of all of this
is that we will have a really good understanding of what it
is that we are ready to ship to the field, and confidence
that it is going to hold up in that environment. That is
all part of our SPLC.

Interviewer: With all of that work that you have just outlined, with all of the
fuzzing, and all of the other testing that you do to the
code, you are still going to find bugs, right? Bugs are
still going to get out there.

Brad Arkin: This stuff involves humans, so we know, from the research
literature, what it takes to make perfect, provable
software, and it is not feasible on a commercial scale.
They do this for safety-critical stuff, and these are apps
that . . . I read somewhere that the code on a pacemaker is
something like 40,000 lines, and when you are
looking at the millions of lines of code that go into a
major commercial product, like the stuff that we ship,
there are going to be faults in there somewhere, despite
all our best efforts. That is why there is so much
investment on the response side of things: the ability to
quickly and efficiently take in potential vulnerability
reports, whether from real-world attacks or just research
results from individuals, convert that into an improved
product that mitigates the risk, and provide any guidance
in between, before that patch is available. It is just a
fact of life in the security industry, so it is something
we have to be ready for.
