Scott Charney: Microsoft security policy and collective defense
Date: Feb 22, 2011
Read the full text transcript from this video below. Please note the full transcript is for reference only and may include limited inaccuracies. To suggest a transcript correction, contact firstname.lastname@example.org.
Eric Parizo: Hi, I'm Eric Parizo. With us today is Scott Charney, Corporate Vice President for Trustworthy Computing at Microsoft. Scott, thanks so much for joining us.
Scott Charney: Thanks for having me.
Eric Parizo: Scott, let's kick things off by talking about Microsoft's
concept behind collective defense. It's a concept you have advocated
strongly for over the past couple of years. Briefly, tell us what it is.
Scott Charney: Essentially, sometimes we rely on people to protect themselves, but sometimes we think about how people and organizations can combine to provide higher levels of protection. So for example, you can lock your own doors and protect your house, or you can have a neighborhood watch where the neighbors look out for each other, and the internet is somewhat the same way. We have told people to turn on firewalls and run antivirus and protect themselves. But increasingly what we need to do is share information and figure out how we can help each other protect the broader ecosystem, and the idea of collective defense is to do that.
Eric Parizo: An important part of the collective defense paradigm is the concept of computer health checks. Explain those for us briefly.
Scott Charney: It was really an evolution of my own thinking. Last year at RSA, I talked a little bit about collective defense, comparing computer health to human health models. And I had mentioned the concept that if ISPs scanned machines looking for malware, they could be more proactive about protecting consumers and protecting the ecosystem. But as I spent more time thinking about it and discussing it with others, that model has some challenges. It puts a lot of burden on the ISPs, of course, and consumers may not want their machines scanned by any third party. We started to recognize that if we put users in control and let them present health certificates, you could provide a model where they could basically attest to the health of their machine, and not just to an ISP, but to anyone who asks, or they can choose not to present the certificate. So we kind of moved to a user-controlled model that actually has a broader distributed application, and that's very much the fabric of the internet: to be massively decentralized and distributed.
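The user-controlled attestation model Charney describes could be sketched roughly as follows. This is a minimal illustration, not any real Microsoft format: the claim names, the HMAC signing key, and the consent flag are all invented for the example.

```python
import hashlib
import hmac
import json

# Hypothetical shared secret standing in for a real attestation key or PKI.
ATTESTATION_KEY = b"demo-key-not-for-production"

def issue_health_certificate(claims):
    """Sign a set of machine-health claims (illustrative format only)."""
    payload = json.dumps(claims, sort_keys=True).encode()
    signature = hmac.new(ATTESTATION_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "signature": signature}

def verify_health_certificate(cert):
    """A relying party (ISP, website) checks the signature before trusting claims."""
    payload = json.dumps(cert["claims"], sort_keys=True).encode()
    expected = hmac.new(ATTESTATION_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cert["signature"])

# The user stays in control: they decide whether to present the certificate.
claims = {"firewall_on": True, "antivirus_current": True, "patches_current": True}
cert = issue_health_certificate(claims)
user_consents = True
if user_consents:
    print(verify_health_certificate(cert))  # True for an untampered certificate
```

The point of the sketch is the control flow: the machine attests to its own state, anyone who asks can verify the attestation, and nothing happens unless the user chooses to present it.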
Eric Parizo: Now there have been a number of skeptics when it comes to collective defense, for a variety of reasons. First off, the concept of internet health checks is primarily geared toward consumer users, correct?
Scott Charney: That's correct. Enterprises actually do collective defense. You have CIOs who manage a number of machines on behalf of the enterprise, but you don't have a CIO for the consumer.
Eric Parizo: How does the concept compare to what, say, an ISP like Comcast is doing with its bot notification program for its customers?
Scott Charney: Those are actually important programs, and we see it
happening here and also in Australia, where they have a voluntary code of
conduct. The main difference is those programs tend to be reactive. That is, they look for known infections and then notify someone that you have an
infection. That's an important thing to do. There will always be a
reactive component, but what makes this different is you can actually ask
someone, 'Do you have the protections in place that will help you prevent
being infected in the first instance?' So instead of just being reactive,
this adds a proactive, preventative component. It's kind of like how we
vaccinate people or teach them to do healthy things like wash their hands,
so they don't contract disease in the first instance.
Eric Parizo: Who would be in charge of deciding what constitutes malware or
what constitutes a remediation incident?
Scott Charney: As a general rule, most of the detection today is done on
known signatures, and that is the model that I think we would certainly
start with. The challenge with anomaly detection has always been the false
positive rates. Sometimes you see an anomaly, and it may not actually be
malware. It could be someone just doing something new. But when you limit
it to known signatures, the risk of false positives goes way, way down and
yes, by doing that you increase your exposure a little bit to something
new. But this model allows us to quickly respond when we identify new forms
of malware. Also it's important to remember this is about risk management,
not risk elimination. So you want to strike the right balance between preventative health and being overly prescriptive in the first instance.
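The signature-based approach Charney favors as a starting point reduces to a set-membership check against known-bad hashes. The sketch below is illustrative; the signature values are fabricated, and real scanners use far richer signature formats than whole-file hashes.

```python
import hashlib

# Toy signature database: SHA-256 hashes of known-malicious files (fabricated).
KNOWN_MALWARE_SIGNATURES = {
    hashlib.sha256(b"EVIL_PAYLOAD_SAMPLE").hexdigest(),
}

def scan(file_bytes):
    """Flag a file only if its hash matches a known signature.

    Matching exact signatures keeps false positives near zero, at the cost
    of missing brand-new malware until a signature is published -- the
    risk-management trade-off Charney describes.
    """
    digest = hashlib.sha256(file_bytes).hexdigest()
    return digest in KNOWN_MALWARE_SIGNATURES

print(scan(b"EVIL_PAYLOAD_SAMPLE"))    # True: matches a known signature
print(scan(b"some brand-new binary"))  # False: unknown, even if malicious
```

This makes the asymmetry concrete: a signature match is near-certain evidence of infection, while a miss proves nothing, which is why a reactive component always remains.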
Eric Parizo: Of course, some say that the concept of collective defense is an admission by Microsoft that it hasn't done a good job with securing its
own software. How do you respond to that?
Scott Charney: Well, obviously we take that seriously, and we have been on a path for many years now to implement the Security Development Lifecycle and drive the number of vulnerabilities down generation over generation. Having said that, however, there are a couple of things that we just have to put on the table. One is we will not get vulnerabilities to zero. Software products tend to be complex, they're made by humans, and there is only so much testing you can do. The second thing is, even if you got the vulnerabilities to zero, which we can't do, users will sometimes make configuration errors. And even if you got vulnerabilities to zero and your machines were perfectly configured, what we've seen is attackers move up the stack to the end user, engage in social engineering, and get people to click on attachments and infect themselves. So you need this not just because of vulnerabilities in systems; you need this to be thorough in a threat environment that has many attack tactics.
Eric Parizo: So does the technology exist today to support this?
Scott Charney: The technology actually does. In the enterprise, we have something called NAP, Network Access Protection, and basically it does look for certain elements: Is your antivirus up to date? Are you consistent with company policy? And it also can quarantine and remediate. We never scaled that to the consumer side, though, and that I think is where the opportunity lies. We are starting to look at how we could employ these models broadly, and we're doing that in consultation with those ISPs that are starting to look at these technologies as a way to help protect consumers.
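The NAP-style gate Charney outlines boils down to evaluating a machine's reported state against an enterprise-defined policy and quarantining on failure. The sketch below is a simplified stand-in, not the actual Network Access Protection API; the policy items and function names are hypothetical.

```python
# Hypothetical policy: what "healthy" means is defined by the enterprise.
POLICY = {
    "antivirus_up_to_date": True,
    "firewall_enabled": True,
}

def evaluate_health(machine_state, policy=POLICY):
    """Return ('grant', []) if compliant, else ('quarantine', failed_checks)."""
    failed = [check for check, required in policy.items()
              if machine_state.get(check) != required]
    if failed:
        # In NAP terms: restrict network access and point the client at
        # remediation servers that can fix the failed checks.
        return "quarantine", failed
    return "grant", []

healthy = {"antivirus_up_to_date": True, "firewall_enabled": True}
stale = {"antivirus_up_to_date": False, "firewall_enabled": True}
print(evaluate_health(healthy))  # ('grant', [])
print(evaluate_health(stale))    # ('quarantine', ['antivirus_up_to_date'])
```

Returning the list of failed checks is what makes remediation possible: the quarantined client learns exactly which conditions to fix before being re-admitted.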
Eric Parizo: Let's talk about practical application for a minute. If an enterprise decides, 'Hey, this is a great paradigm, I want to implement it tomorrow,' how do they go about doing that?
Scott Charney: They would start putting that infrastructure in place. There are a couple of components you need that we are working on. One is Windows Security Center, which actually already has an API you can access to get information from. Then you also have to think about how robust the client is going to be, and how on the enterprise side you define what policies you want, what things you're going to check for, and what you're going to do if something doesn't pass the check. So there's some technology and logic that has to be built, but I think a lot of the infrastructure is in place today.
Eric Parizo: How long do you think it will be until Fortune 500-size
enterprises have systems like this in place?
Scott Charney: Well, some enterprises already have it for their organizations; we do it at Microsoft today. To get into the consumer space takes a little bit of time, in part because you want to show consumers the value proposition behind this. Studies show that a lot of consumers actually do not run antivirus today or take some of those other preventative measures. These approaches can certainly help catalyze that, but we do want to make sure that there's social acceptance and people understand both what you're doing and what you're not doing, like you're not surveilling their machines.