
Software security assurance: Marcus Ranum chats with Oracle's CSO

After decades in the hot seat, Oracle's CSO Mary Ann Davidson is still fighting systemic risk and the vulnerabilities of enterprise software.


As the CSO for Oracle, Mary Ann Davidson is responsible for software security assurance rather than IT security: specifically, ensuring that the enterprise software and computer hardware company “bakes security in” to its product and cloud offerings. “It’s an area that has gotten a lot of attention,” she says. “Many customers want to know what we do to engineer security into products and services.”

Davidson has worked for the database powerhouse since 1988, long before Oracle’s “Unbreakable” campaign in 2002, which incited security researchers to target its software. The second largest software provider after Microsoft, Oracle moved into hardware in 2010 when it acquired Sun Microsystems. At the time, Sun Microsystems sold computers, storage systems and software -- including MySQL and the Java software platform -- among other technologies.

Not one to shy away from challenges, Davidson has held jobs at several Silicon Valley companies and served as a commissioned officer in the U.S. Navy Civil Engineer Corps. She earned her MBA from the Wharton School of the University of Pennsylvania.

She is also one half of the Maddi Davidson writing team. She and her sister co-write the Miss-Information Technology Mystery series: Outsourcing Murder, Denial of Service, and With Murder You Get Sushi.

Marcus Ranum, who has felt the weight of a software release cycle more than once in his career, caught up with Davidson to find out her views on the security development lifecycle and code bases too big to fail.

Marcus Ranum: I’ve been responsible for a software release cycle, but it’s always been for relatively small, perhaps even tiny, software. I’m thrilled to death to ask you a few questions about your experience with security and software at your job: How does your team interface with development regarding security patches and problems in the development lifecycle?

Mary Ann Davidson: The purpose of my team’s interaction with development -- and there is a lot of it, we have over 4,500 products and cloud services -- is to try to avoid security patches and problems in the first place. That’s the purpose of the program we run called Oracle Software Security Assurance (OSSA), which outlines security requirements for product and cloud development lifecycles covering inception -- when code is a gleam in a developer’s eye -- through to product or service delivery and code maintenance. OSSA covers areas such as secure coding and secure development standards, the requirement to have designated security leads (senior security people responsible for large swathes of implementing OSSA into products and services) and security points of contact (more junior people responsible for, say, a component rather than an entire product or cloud area). Those leads and security points of contact are the ‘boots on the ground’ in terms of implementing OSSA. Other requirements of OSSA include the use of automated tools to find security bugs in code, release checklists to ensure that development teams followed our security requirements before releasing products or cloud software, secure configurations and compliance with our vulnerability handling processes.
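To make the release checklist idea concrete, here is a minimal sketch, in Python, of a pre-release security gate that refuses to ship a component until its checklist items are complete. The checklist items, data layout and function names are invented for illustration; this is not Oracle's actual OSSA tooling.

# Hypothetical pre-release security gate; checklist items and names are illustrative.
from dataclasses import dataclass, field

REQUIRED_CHECKS = (
    "secure_coding_standards_reviewed",
    "static_analysis_clean_or_triaged",
    "third_party_libraries_inventoried",
    "security_lead_signoff",
)

@dataclass
class ReleaseCandidate:
    name: str
    completed_checks: set = field(default_factory=set)

def release_gate(candidate: ReleaseCandidate) -> list:
    """Return unmet checklist items; an empty list means the release can proceed."""
    return [check for check in REQUIRED_CHECKS
            if check not in candidate.completed_checks]

# Example: a component that has only completed two of the four required checks.
rc = ReleaseCandidate("example-component-1.2",
                      {"secure_coding_standards_reviewed", "security_lead_signoff"})
missing = release_gate(rc)
if missing:
    print(f"Blocking release of {rc.name}; unmet checks: {missing}")
else:
    print(f"{rc.name} passed the security release gate.")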

Obviously, no software is perfect, so when we do find a problem, or one is reported to us, we have requirements for labeling the bug a security bug, scoring it using the Common Vulnerability Scoring System (CVSS) base score and having granular access control on the bug to enforce ‘need to know.’ We keep metrics around bug handling to ensure that, all things being equal, we fix the worst things the fastest; hence, CVSS scoring. One of the more interesting metrics is that we find 87% of security vulnerabilities ourselves; the remainder are reported by customers and third-party security researchers.
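Her point about fixing the worst things fastest can be illustrated with a simplified triage sketch: order the open security bugs by CVSS base score and report the standard CVSS v3.x qualitative severity band alongside each. The bug records below are invented, and the sketch is not Oracle's internal bug-handling system.

# Simplified triage sketch: handle the highest CVSS base scores first.
def severity(base_score: float) -> str:
    """Map a base score to the standard CVSS v3.x qualitative severity band."""
    if base_score == 0.0:
        return "None"
    if base_score < 4.0:
        return "Low"
    if base_score < 7.0:
        return "Medium"
    if base_score < 9.0:
        return "High"
    return "Critical"

# Invented bug records for illustration only.
bugs = [
    {"id": "BUG-101", "cvss": 9.8, "component": "listener"},
    {"id": "BUG-102", "cvss": 4.3, "component": "admin console"},
    {"id": "BUG-103", "cvss": 7.5, "component": "parser"},
]

# Worst first: highest base score at the top of the queue.
for bug in sorted(bugs, key=lambda b: b["cvss"], reverse=True):
    print(f'{bug["id"]}: CVSS {bug["cvss"]} ({severity(bug["cvss"])}) '
          f'in {bug["component"]}')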

Generally speaking, we fix security bugs in the main code line first, so that new versions and patch sets pick up the fix. In some cases, generally having to do with security researchers’ reported bugs, we bundle security vulnerabilities into quarterly Critical Patch Updates by which we deliver more time-critical security fixes to supported versions of products and cloud services. In other cases (rarely) when a security vulnerability is very severe, or there are exploits circulating, we do a one-off security fix announced via a Security Alert.

Ranum: How much of an impact do you experience when someone finds a bug?

Davidson: Well, there are multiple impacts. In the case of exploits circulating in the wild, where a bug is a candidate for a Security Alert, it’s an ‘all hands on deck’ exercise. We obviously -- especially in the case of the worst bugs -- want to avoid those in the future. Part of remediation includes looking for other, similar issues in the code; possibly making additions to the secure development standards; adding test cases to the tools we use, both to find similar bugs and to make sure the bug is not reintroduced later; and debriefing development so the ‘lessons learned’ are broadly shared.

One impact a lot of people felt last year is vulnerabilities in third-party, often open source, libraries that are widely used. In the past, a lot of people would use third-party libraries and either not upgrade them regularly, not keep track of where they were used, or both. A number of widely publicized issues -- POODLE, Heartbleed, FREAK [factoring attack on RSA export keys] -- in third-party libraries have woken people up. You need to ensure that you are on current, or reasonably current, libraries that will be supported for the life of the product in which they are embedded, and that you have a good inventory of where all these libraries are used.
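The inventory discipline she describes can be approximated with something as simple as a manifest of which products embed which third-party libraries, checked against a floor of minimum supported versions. The products, libraries and version numbers below are invented for illustration.

# Hypothetical third-party library inventory check; all data is invented.

# Minimum library versions still receiving security fixes (illustrative).
MIN_SUPPORTED = {"openssl": (1, 0, 2), "zlib": (1, 2, 8)}

# Which product embeds which library, and at what version.
INVENTORY = [
    {"product": "widget-server", "library": "openssl", "version": (1, 0, 1)},
    {"product": "widget-server", "library": "zlib",    "version": (1, 2, 11)},
    {"product": "report-engine", "library": "openssl", "version": (1, 0, 2)},
]

def stale_entries(inventory, minimums):
    """Return inventory rows whose embedded library is below the supported floor."""
    return [row for row in inventory
            if row["version"] < minimums.get(row["library"], (0,))]

for row in stale_entries(INVENTORY, MIN_SUPPORTED):
    print(f'{row["product"]} embeds {row["library"]} '
          f'{".".join(map(str, row["version"]))} -- below the supported minimum')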

Ranum: I like those steps. They remind me of Tom Van Vleck’s ‘Three Questions About Each Bug You Find.’

I’m really hot on metrics. What kind of data do you present about your program, and does it justify itself? I’m assuming it does, because I know you’ve been doing this seriously for a while. Usually whenever I talk about software security to an executive I get back ‘but it’s hard.’ We all need better ways to make that case. How do you do it?

Davidson: Yes, it is hard. For example, it’s hard to use metrics to prove what didn’t happen: ‘Because we trained X number of developers, we avoided Y vulnerabilities.’ For us, security is a brand issue; customers expect us to protect their data as if it were ours. But it is also cost avoidance. Because we have so many products and supported versions, and they run on so many operating systems, even one vulnerability can require us to create a lot of patches. Then, of course, we have to apply those patches to our own systems and our cloud services. Thus, ‘improving security’ is a good investment all around.

Many of our metrics are focused around security vulnerability handling since -- all things being equal -- customers want to know you fix the worst stuff the fastest. We want to avoid security vulnerabilities in the first place, or else find these ourselves and fix them before hackers find them. But you need to be careful not to create perverse incentives with metrics, like penalizing teams for finding issues in their own code.
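One way to track the ‘worst stuff fastest’ goal without penalizing teams for finding their own bugs is to report time-to-fix by severity band rather than raw bug counts per team. The sketch below computes a median days-to-fix per band from invented records; a real program would pull this data from a bug tracker.

# Illustrative metric: median days from report to fix, grouped by severity band.
from statistics import median
from collections import defaultdict

# Invented records for illustration only.
fixed_bugs = [
    {"severity": "Critical", "days_to_fix": 12},
    {"severity": "Critical", "days_to_fix": 20},
    {"severity": "High",     "days_to_fix": 45},
    {"severity": "Medium",   "days_to_fix": 90},
    {"severity": "Medium",   "days_to_fix": 70},
]

by_severity = defaultdict(list)
for bug in fixed_bugs:
    by_severity[bug["severity"]].append(bug["days_to_fix"])

for sev in ("Critical", "High", "Medium", "Low"):
    if by_severity[sev]:
        print(f"{sev}: median {median(by_severity[sev])} days to fix")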

Another area of metrics is ‘scoring what we do against what we say we do’ vis-a-vis our OSSA program, which we do quarterly and report to executives twice a year. We are experimenting with these [reports] to find ways to go beyond red-green-yellow scoring. For example, how can we spot problem children -- those product teams struggling with security -- faster? Also, can we provide some leeway to more mature development teams -- for example, letting them target some OSSA requirements for a deep dive where they feel the resources are better spent? Metrics are like pantyhose: one size fits all usually doesn’t.

Ranum: At the risk of making your head explode, at what point do you think we need to start thinking of certain code bases as ‘critical infrastructure’? Among the companies I’d consider on that list -- Oracle, Microsoft, Adobe, Apple and Cisco, in no particular order -- I think the situation is qualitatively better than it was 15 years ago, but the problem is bigger. What would be your advice to someone taking ownership of a code base that was ‘too big to fail’?

Davidson: I don’t think it is the size of the code base per se that is the issue. I think it is a failure of many to understand 1) the nature of all software in general and 2) commercial software in particular, combined with 3) a lack of consideration for systemic risk. I am amazed at how many requests I get for guarantees that our software is ‘free of defects.’ The nature of software is that even with very good engineering practices, good quality assurance, security-aware developers and the like, the one thing you can assert is that all software will have defects; some of those will be security relevant, and some of those will be serious: There is no perfect code. Better code is desirable, achievable, and economically in vendors’ -- not to mention customers’ -- interests, but it is never going to be perfect.

Second, commercial off-the-shelf (COTS) software is very good general-purpose software, but it was not designed for absolutely all threat environments -- an important limitation. Many entities use COTS software because it is relatively cheap, reliable, maintainable and so forth, and, of course, many are moving to cloud services to avoid direct IT costs, but they forget that ‘not designed for all threat environments’ caveat. Think about squirt guns. They are really good at soaking your big brother, but they won’t do anything to protect you against an angry bear; you might make him angrier. More importantly, there is no magic security pixie dust that a system integrator can sprinkle on squirt guns, hand them to the U.S. Marines, and say, ‘These are M4s: go fight the enemy.’…Why do people think software is different? ‘General purpose’ is not ‘all-purpose.’

Third, making absolutely everything Internet accessible carries the potential for creating systemic, and thus ‘unmitigateable,’ risk. Think about the complexities and ‘unknowns’ about software in general, and now add in extreme interconnectedness.

Consider a ‘smart refrigerator’ that, say, reorders milk for you when it detects your milk is past its sell-by date. …What if a hacker can detect that you haven’t opened your refrigerator door in seven days? Clearly, you are either dead, on a strange diet or…you aren’t home and yours is a good house to burgle. To say nothing of having a worm infest your fridge (the digital kind, not the ‘I forgot to rinse the lettuce’ kind) so you are locked out. Oh, and that fridge you thought would last for 20 years now has to be scrapped in five because you can’t upgrade the software.

We are creating fragility and hackable interconnectedness that can only be avoided by not creating the problem in the first place.

The rush to IP-enable absolutely everything reminds me of someone trying to design a hand grenade with a child-proof pin. You want to say, ‘Are you nuts? Think about what can go wrong if you don’t get the security right.’ That’s called avoiding systemic risk.

Ranum: I know how busy you are, so I'll let you go on that note! Thank you so much for your time.

Davidson: I’ve always found security to be fascinating on multiple levels: everything from public policy issues to the geeky bits. Like military history -- a fascination I know we both have -- you sometimes want to shake your head and wonder if we will ever learn the lessons of the past, or just keep making the same mistakes on a bigger scale and with much bigger consequences. That’s also what keeps me highly motivated to do what I can to make a difference.

About the author:
Marcus J. Ranum, CSO of Tenable Security Inc., is a world-renowned expert on security system design and implementation. He is the inventor of the first commercial bastion host firewall.

This was last published in June 2015
