Published: 02 Nov 2015
The continuous development model of Agile -- with no end in sight -- heightens the need to address secure development practices as teams crank out software iterations to meet constantly evolving business requirements.
Oversight of secure application development also extends beyond internal systems to a wide range of external components and third-party services. The real issue may not be how application security adapts to our changing landscape, says Sam King, chief strategy officer at Veracode. It’s how to find application security practitioners who can navigate the technical side -- and the business side -- of their role.
“In our experience, reducing the risk of application breaches requires insight into your organization’s business practices, visibility into your supply chain, and the ability to find common ground with development teams, both inside and outside the organization,” she says.
Recognized by Mass High Tech as a Woman to Watch, an award that honors contributions of women in technology and life sciences, King is a frequent speaker at the RSA Conference and Gartner Security Summit, among other industry events. In her current position, she is responsible for product management, marketing, corporate development and the company’s customer-facing system architects. Marcus Ranum caught up with King to find out more about her views on Agile and secure application development.
Marcus Ranum: I did an interview the other day with Jim Routh [Aetna CSO] in which I asked him a bunch of nosy questions about how his team injects security into an Agile software development process. Have you observed an improvement or worsening of security as a result of Agile? What is your experience with code in the wild, seeing as you’ve got a rather high-level view of the output of development processes?
Sam King: I don’t see Agile and security as orthogonal or opposing concepts. All software -- regardless of the development methodology used to produce it -- could and, in fact, does have security vulnerabilities. Our data shows that there is very little variability in first-scan pass rate by lifecycle stage against a standard like the OWASP Top 10, regardless of whether the application is just entering alpha in 2015, or if it has been deployed for many years. There’s a perception that Agile doesn’t embrace secure coding practices because the emphasis is placed on speed to deliver functionality versus secure functionality. However, the core of what makes Agile attractive lends itself to security quite well. The focus on rapidly delivering features based on customer needs means that as security becomes more important, it becomes a customer priority that makes its way into the Agile process.
We’ve seen two different approaches be successful. The first is figuring out how security teams can better work with the development teams. Generally, process innovations like standard security design and review tasks; inclusion of security teams in story grooming, planning and retro ceremonies; and the creation of developer security champions bear fruit. The second approach we’ve seen is automating application security testing at multiple stages of the workflow, from the developer desktop through to integration-test time via build server integration, and even in production.
Another shift we’re seeing is that newer software projects built from the ground up tend to have fewer issues integrating security into their Agile workflows because they can design it in proactively from the start. No Agile team wants to go through rapid development cycles only to hit a big security gate that prevents them from going live, so they’re building security review and testing criteria into their ‘definition of done’ for features and sprints. They’re even asking questions of testing providers about their ability to handle new architectural patterns, such as those in which Web services built in Java, .NET or Node.js support both cross-platform mobile apps and single-page Web applications.
Ranum: Back in 1994 I shipped a firewall product that used syslog( ) all over the place, and (of course) it turned out that the syslog( ) formatting bug attack could be used against it. That was a huge reality check for me. Ever since, I’ve realized that as an industry we have a problem with not just code quality, but with component quality. It’s common nowadays to mash up applications using entire programs as components -- and the resulting application inherits a bug stack consisting of the sum of the bugs in the components plus any interactions between them: It gets nasty fast. How does that play with Agile? What about the mashup development model?
King: We see the rise of Web service-based mashup applications as the logical extension of the trend toward software construction via component assembly. Twenty years ago a development team would have built all the infrastructure and utility routines of an application in house, or relied on a single platform provider -- like Sun, Microsoft, or [the company’s] Unix system vendor -- to provide that plumbing. Then came the rise of component-based software development, and development velocity increased tremendously. Yet Heartbleed and Shellshock proved that this velocity doesn’t come for free; it’s offset by security debt, in the form of latent risks in the software supply chain. We’re only now seeing some leading industries, like financial services, identify supply chain risk as an explicit security issue that needs its own control strategy. (The Financial Services Information Sharing and Analysis Center, FS-ISAC, has a great white paper on the topic of third-party software risk that’s worth checking out.)
So what’s different about Web service-based mashups? Technologically, there’s a looser coupling of the components, which makes static analysis testing of the whole application harder. You have to apply multiple testing strategies to attempt to prevent risk from entering your environment. [You have to] look at the security of each individual component using static analysis and software composition analysis, and then look at the end-to-end system using a runtime dynamic or behavioral test. And you have to be prepared to do that across organizational boundaries and, specifically, tackle that risk in your supply chain, since you don’t own all the code you’ll want to test. Lastly, you need to have some protective controls in place, in case you haven’t caught everything in the prevention phase.
This is challenging to do for loosely coupled services because the problem extends across organizational boundaries and into your suppliers. Fortunately, software isn’t the first industry to grapple with supply chain issues; a lot of principles can be learned from other supply chain transformation initiatives. You need an agreed-on set of quality standards, compliance initiatives with teeth, a way for vendors to signal compliance with those standards, a way to test for compliance that everyone agrees on, and a clear value proposition for both the enterprise and the supply chain to make it work.
We are starting to see some of those pieces come to fruition in the context of vendor-supplied applications, among them the FS-ISAC recommendation for binary static testing, software component analysis, and vBSIMM (or the equivalent, OpenSAMM); market standards for testing like the OWASP Top 10, the CWE/SANS Top 25 Most Dangerous Software Errors, and Veracode’s Verafied seal; the inclusion of software and supply chain security in the PCI standard; and the threat of federal lawsuits for inadequate cybersecurity protection. For mashup applications that leverage third-party Web services, this model -- and some of these specific (risk avoidance) strategies -- may prove helpful for organizations trying to get their arms around this risk.
Ranum: What is your biggest challenge in getting organizations to build software security into their lifecycles? It seems to me that perception is a big piece of it: Waterfall is too slow. Software security is too slow. Agile is fast [but] insecure. From where you sit in the software chain, I’m guessing you’ve got an actual view of what the situation really looks like?
King: I agree with your characterizations of the perceptions of the various development methodologies and software security. We should recognize though that they are exactly that -- perceptions. The reality is that regardless of the development methodology used we see a lot of software with security flaws in it. The biggest challenge that I see is that if organizations approach the software security problem as solely a technology implementation problem, they will have a hard time getting the organization to embrace it.
It is not about deploying a tool or doing penetration tests at a recurring frequency on a subset of your applications; [it’s] a change management problem where you are trying to integrate the needs of development -- really, your business -- and of security. We still see clashes that hit at the heart of both groups’ objectives, such as security conducting a penetration test or running scans at the 11th hour and insisting that the development team drop everything to address all the findings before they ship. At the same time, we have also seen some great work done to get developers and security teams on the same page, by aligning their incentives. [As] with any large-scale change management program, get executive buy-in, identify quick wins and keep the momentum going through continued dialogue and tuning as the organization learns and grows.
Ranum: This is a favorite question of mine because I keep getting asked it and I don’t have great answers: If you inherited a software development organization, what would be the first steps you’d take to start establishing whether security was a concern and maturing the development organization’s software security? (I’m assuming that virtually nobody inherits an organization that’s got a mature and effective software security lifecycle.)
King: A really simple question to ask is: Who is the designated security person on the development team? If you get a name, that’s a good sign that they thought about the problem enough to assign ownership to it. If you are told that everyone on the development team owns it, then it is a safe bet that no one does. Don’t get me wrong -- every developer ultimately needs to be responsible for the security of the code they are producing, but we are still early in that journey. For now, having a person who champions this cause inside a development team and helps institute the right process, policy and technology is the way to get it incubated inside the organization and get developer adoption.
Another good way to assess whether security is a concern is to test the development team’s practical knowledge of security concepts. Our data shows that while 56% to 60% of developers answer security awareness questions correctly, only about 40% can correctly answer questions about secure development practices. Developers may understand security threats, but many lack the skills to translate that understanding into secure development.
We find it’s really important to get people using the tools early and to establish a success story you can then retell as a proof point within the organization. Developers trust their peers and their own experience more than anything else, so establishing real proof that the security assessment process works and is non-disruptive is critical.
The bottom line is that the problem of application security is broader than just testing applications. In our experience, reducing the risk of application breaches requires insight into your organization’s business practices; visibility into your supply chain; the ability to find common ground with development teams, both inside and outside the organization; and [then] finding ways to stay current with and adjust your approaches to rapidly changing technology landscapes and application development methodologies. That’s a big remit. In fact, the real issue may not be how application security adapts to this changing landscape, but how you find application security practitioners who can navigate both the technical and business sides of their role. The question in my mind is: How can we as an industry find or develop enough of these special yet much needed skills to meet this demand?
About the author:
Marcus J. Ranum, the chief of security at Tenable Network Security Inc., is a world-renowned expert on security system design and implementation. He is the inventor of the first commercial bastion host firewall.