How to develop software the secure, Gary McGraw way
The Building Security in Maturity Model is chock full of hardcore software security goodness as practiced by some of the most successful companies on earth. But there are too many activities to take on all at once without capsizing the ship. Which BSIMM activities are the most important? And more critically, if you are just getting started, which BSIMM activities should you adopt first?
If you are getting started with the Building Security in Maturity Model (BSIMM), you should consider the twelve most common activities, all of which are straightforward and easy to adopt.
There are a few flies in the ointment with this oversimplified twelve-step plan. First of all, the BSIMM has 111 activities, so we're severely limiting our view here if we focus on only twelve. Second, just because nearly every other firm in the world is carrying out the twelve activities we describe here, that doesn't mean they will work for your firm. But that's OK. We're going to forge ahead anyway.
Of the 111 activities observed in BSIMM4, there are twelve activities that at least 32 of the 51 firms we studied carry out (63%), one identified in each practice. Though we can't directly conclude that these twelve activities are necessary for all software security initiatives, we can say with confidence that these activities are commonly found in highly successful programs. This suggests that if you are working on an initiative of your own, you should consider these twelve activities particularly carefully (not to mention the other 99). In addition, if you are planning a new software security initiative, the following activities are probably a good place to get started.
First, some quick review. The BSIMM is the result of a multi-year study of real-world software security initiatives. The latest version, BSIMM4, was built directly from data observed in 51 software security initiatives at the following firms: Adobe, Aon, Bank of America, Box, Capital One, The Depository Trust & Clearing Corporation (DTCC), EMC, F-Secure, Fannie Mae, Fidelity, Google, Intel, Intuit, JPMorgan Chase & Co., Mashery, McKesson, Microsoft, Nokia, Nokia Siemens Networks, Qualcomm, Rackspace, Salesforce, Sallie Mae, SAP, Scripps Networks Interactive, Sony Mobile, Standard Life, SWIFT, Symantec, Telecom Italia, Thomson Reuters, Visa, VMware, Wells Fargo and Zynga.
The BSIMM is a measuring stick for software security. The best way to use the BSIMM is to compare and contrast your own initiative with the data presented in the BSIMM. You can then identify goals and objectives of your own and look to the BSIMM to determine which further activities make sense for you. The BSIMM data shows that high maturity initiatives are well rounded, carrying out numerous activities in all twelve of the practices described by the model. The model also describes how mature software security initiatives evolve, change and improve over time.
A descriptive view of 111 software security activities
The BSIMM model was derived directly from data gathered through first-hand observation. In the course of making nearly 100 distinct measurements (some firms were measured twice, and some firms include subsidiary measurements that roll up into a single firm score), we have identified 111 activities. The most direct way to report this data is to show the number of times each activity was observed among the data set of 51 firms.
The table above shows the number of times each of the 111 activities was observed in the BSIMM4 data. An expanded version of this chart can be found in the BSIMM document itself (available for free under the Creative Commons license). The BSIMM document also meticulously describes each of the 111 activities. Here, we're sticking with the twelve most popular activities.
As you can see in the table, twelve of the activities are highlighted. Each highlighted activity is the most commonly observed in its practice, having been observed in at least 32 of 51 firms. That means each of the twelve activities is very popular and in common use in real-world software security initiatives.
Know that although the twelve activities we're covering are common, they may not make sense for your firm (for cultural, budgetary, or other reasons). However, the BSIMM data describes what is actually happening in the world today when it comes to software security, and thus provides very useful guidance from seasoned software security professionals.
Twelve core BSIMM activities
Without further ado, here are the twelve most common BSIMM activities. (We preserved the somewhat obscure BSIMM labels so you can check them out later in context of the entire BSIMM model.)
- SM1.4 Identify gate locations, gather necessary artifacts;
- CP1.2 Identify PII obligations;
- T1.1 Provide awareness training;
- AM1.5 Gather attack intelligence;
- SFD1.1 Build and publish security features;
- SR1.1 Create security standards;
- AA1.1 Perform security feature review;
- CR1.4 Use automated tools along with manual review;
- ST1.1 Ensure quality assurance (QA) supports edge/boundary value condition testing;
- PT1.1 Use external penetration testers to find problems;
- SE1.2 Ensure host and network security basics are in place; and
- CMVM1.2 Identify software defects found in operations monitoring and feed them back to development.
Detailed descriptions of each of the twelve activities, including real examples taken directly from the BSIMM data, can help bring these activities to life.
SM1.4 Identify gate locations, gather necessary artifacts: The software security process will involve release gates/checkpoints/milestones at one or more points in the software development lifecycle (SDLC) or, more likely, the SDLCs. The first two steps toward establishing release gates are: 1) to identify gate locations that are compatible with existing development practices, and 2) to begin gathering the input necessary for making a go/no-go decision. Importantly at this stage, the gates are not enforced. For example, the software security group (SSG) can collect security testing results for each project prior to release, but stop short of passing judgment on what constitutes sufficient testing or acceptable test results. The idea of identifying gates first and only enforcing them later is extremely helpful in moving development toward software security without major pain. Socialize the gates, and only turn them on once most projects already know how to succeed. This gradual approach serves to motivate good behavior without requiring it.
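The "identify first, enforce later" idea can be made concrete with a small sketch. The script below is purely illustrative (the project names, artifact fields and thresholds are all made up, not part of the BSIMM): it gathers security artifacts at a checkpoint and records what a go/no-go decision *would* be, without ever blocking a release.

```python
# Sketch of a non-enforcing release gate. All names and fields are
# hypothetical; the point is that the gate observes and reports but
# never blocks until the organization decides to turn enforcement on.

import json

def evaluate_gate(project, artifacts):
    """Collect artifacts and record an advisory go/no-go, without enforcing."""
    findings = []
    if artifacts.get("static_analysis_high", 0) > 0:
        findings.append("high-severity static analysis results present")
    if not artifacts.get("security_tests_run", False):
        findings.append("no security test results submitted")
    return {
        "project": project,
        "would_pass": not findings,
        "findings": findings,
        "enforced": False,  # socialize the gate first; flip this later
    }

if __name__ == "__main__":
    report = evaluate_gate(
        "payments-api",
        {"static_analysis_high": 2, "security_tests_run": True},
    )
    print(json.dumps(report, indent=2))
```

Once most projects routinely "would pass," flipping the `enforced` flag (and failing the build on findings) is a small change with little political cost.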
CP1.2 Identify PII obligations: The way software handles personally identifiable information (PII) could be explicitly regulated, but even if it is not, privacy is a hot topic. The SSG takes a lead role in identifying PII obligations stemming from regulation and customer expectations. It uses this information to promote best practices related to privacy. For example, if the organization processes credit card transactions, the SSG will identify the constraints that the PCI DSS places on the handling of cardholder data. Note that outsourcing to hosted environments (e.g., the cloud) does not relax a majority of PII obligations. Also note, firms that create software products that process PII (but don't necessarily handle PII directly) may provide privacy controls and guidance for their customers.
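One concrete practice a PII standard of this kind might mandate is keeping cardholder data out of application logs. The sketch below is an illustration only (the regex and helper name are assumptions, not PCI DSS text): it masks anything that looks like a 16-digit primary account number before it is logged.

```python
# Illustrative sketch: masking primary account numbers (PANs) before they
# reach application logs. The pattern below is a simplification; real PAN
# detection must handle varying lengths, separators and Luhn checks.

import re

PAN_RE = re.compile(r"\b(\d{6})\d{6}(\d{4})\b")  # 16-digit card numbers

def mask_pan(text):
    """Replace the middle six digits of anything that looks like a PAN."""
    return PAN_RE.sub(lambda m: m.group(1) + "******" + m.group(2), text)

print(mask_pan("charge failed for card 4111111111111111"))
# prints: charge failed for card 411111******1111
```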
T1.1 Provide awareness training: The SSG provides awareness training in order to promote a culture of security throughout the organization. Training might be delivered by members of the SSG, by an outside firm, by the internal training organization, or through a computer-based training system. Course content is not necessarily tailored for a specific audience. For example, all programmers, quality assurance engineers and project managers could attend the same Introduction to Software Security course. This common activity can be enhanced with a tailored approach to an introductory course that addresses a firm's culture explicitly. Generic introductory courses covering basic IT security and high-level software security concepts do not generate satisfactory results. Likewise, providing awareness training only to developers and not to other roles is also insufficient.
About the [In]Security column
This monthly security column by Gary McGraw started life in print in IT Architect and Network magazines and was originally called “[In]security.” That was back in October 2004. The column then transitioned into Web content at several publications before finding a home at SearchSecurity. You can always find pointers to the complete [In]security series on McGraw’s writing page. Your feedback on the column is greatly appreciated.
AM1.5 Gather attack intelligence: The SSG stays ahead of the curve by learning about new types of attacks and vulnerabilities. The information comes from attending conferences and workshops, monitoring attacker forums, and reading relevant publications, mailing lists and blogs. Make Sun Tzu proud by knowing your enemy; engage with the security researchers who are likely to cause you trouble. In many cases, a subscription to a commercial service provides a reasonable way of gathering basic attack intelligence. Regardless of its origin, attack information must be made actionable and useful for software builders and testers.
SFD1.1 Build and publish security features: Some problems are best solved only once. Rather than have each project team implement all of their own security features (authentication, role management, key management, audit/log, cryptography, protocols), the SSG provides proactive guidance by building and publishing security features for other groups to use. Project teams benefit from implementations that come pre-approved by the SSG, and the SSG benefits by not having to repeatedly track down the kinds of subtle errors that often creep into security features. The SSG can identify an implementation they like and promote it as the accepted solution.
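The kind of subtle error this activity heads off is worth a small example. Below is a toy version of a published, pre-approved security feature (the module and function names are invented): a signature-verification helper that avoids the timing-leak bug teams commonly introduce by comparing secrets with a plain `==`.

```python
# Toy example of a small SSG-published security feature: constant-time
# HMAC verification. Function and parameter names are illustrative.

import hashlib
import hmac

def verify_signature(secret: bytes, message: bytes, signature: str) -> bool:
    """Check an HMAC-SHA256 signature in constant time."""
    expected = hmac.new(secret, message, hashlib.sha256).hexdigest()
    # compare_digest resists timing attacks; a plain '==' comparison
    # can leak how many leading characters matched
    return hmac.compare_digest(expected, signature)
```

Publishing one vetted helper like this, rather than letting each project roll its own comparison, is exactly the economy of scale SFD1.1 describes.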
SR1.1 Create security standards: Software security requires much more than security features, but security features are part of the job as well. The SSG meets the organization's demand for security guidance by creating standards that explain the accepted way to adhere to policy and carry out specific security-centric operations. A standard might describe how to perform authentication using J2EE or how to determine the authenticity of a software update. (See [SFD1.1 Build and publish security features] for one case where the SSG provides a reference implementation of a security standard.) Standards can be deployed in a variety of ways. In some cases, standards and guidelines can be automated in development environments (e.g., worked into an integrated development environment). In other cases, guidelines can be explicitly linked to code examples to make them more actionable and relevant.
AA1.1 Perform security feature review: To get started with architecture analysis, center the analysis process on a review of security features. Security-aware reviewers first identify the security features in an application (authentication, access control, use of cryptography, etc.) then study the design looking for problems that would cause these features to fail at their purpose or otherwise prove insufficient. For example, a system that was subject to privilege escalation attacks because of broken access control or a system that stored unsalted password hashes would both be identified in this kind of review. At higher levels of maturity, this activity is eclipsed by a more thorough approach to architecture analysis not centered on features. In some cases, use of the firm's secure-by-design components can streamline this process.
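The unsalted-hash flaw mentioned above is easy to show side by side with the fix a security feature review would push for. This is a sketch, not a complete password-storage design; the PBKDF2 parameters are illustrative.

```python
# The flaw a security feature review catches, next to the remediation.

import hashlib
import os

def weak_hash(password: str) -> str:
    # Flagged in review: no salt, so identical passwords hash
    # identically and precomputed (rainbow) tables apply.
    return hashlib.sha256(password.encode()).hexdigest()

def stronger_hash(password: str) -> bytes:
    # Per-user random salt plus a deliberately slow key-derivation
    # function; iteration count here is illustrative only.
    salt = os.urandom(16)
    return salt + hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
```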
CR1.4 Use automated tools along with manual review: Incorporate static analysis into the code review process in order to make code review more efficient and more consistent. The automation does not replace human judgment, but it does bring definition to the review process and security expertise to reviewers who are not security experts. A firm may use an external service vendor as part of a formal code review process for software security. This service should be explicitly connected to a larger software security development lifecycle (SSDL) applied during software development, and not just "check the security box" on the path to deployment.
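The division of labor between tool and reviewer can be illustrated with a toy checker. Real initiatives use commercial or open source static analyzers; this sketch, built on Python's `ast` module, merely shows the pattern of automation surfacing candidates for human judgment rather than passing verdicts itself.

```python
# Toy static checker illustrating "automated tools along with manual
# review": it flags risky calls for a human reviewer instead of
# auto-failing the build. The RISKY_CALLS set is an assumption.

import ast

RISKY_CALLS = {"eval", "exec"}

def flag_for_review(source: str):
    """Return (line, call name) pairs a human reviewer should examine."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id in RISKY_CALLS):
            findings.append((node.lineno, node.func.id))
    return findings

print(flag_for_review("x = eval(user_input)\nprint(x)"))
# prints: [(1, 'eval')]
```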
ST1.1 Ensure QA supports edge/boundary value condition testing: The QA team goes beyond functional testing to perform basic adversarial tests. They probe simple edge cases and boundary conditions. No attacker skills required. When QA understands the value of pushing past standard functional testing using acceptable input, they begin to move slowly toward "thinking like a bad guy." A discussion of boundary value testing leads naturally to the notion of an attacker probing the edges on purpose. What happens when you enter the wrong password over and over?
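Here is what that looks like in practice for a hypothetical username validator (the 1-to-32-alphanumeric rule is invented for illustration): ordinary functional tests plus the edge cases, exactly at and one past each limit, that start to resemble attacker probing.

```python
# Sketch of QA boundary-value tests for a made-up validator.
# Run with: python -m unittest <module>

import unittest

def valid_username(name: str) -> bool:
    # Hypothetical rule: 1 to 32 alphanumeric characters.
    return name.isalnum() and 1 <= len(name) <= 32

class BoundaryTests(unittest.TestCase):
    def test_typical(self):
        self.assertTrue(valid_username("alice"))

    def test_edges(self):
        self.assertFalse(valid_username(""))           # below minimum
        self.assertTrue(valid_username("a" * 32))      # exactly at limit
        self.assertFalse(valid_username("a" * 33))     # one past the limit
        self.assertFalse(valid_username("a; DROP--"))  # hostile input
```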
PT1.1 Use external penetration testers to find problems: Many organizations are not willing to address software security until there is unmistakable evidence that the organization is not somehow magically immune to the problem. If security has not been a priority, external penetration testers demonstrate that the organization's code needs help. Penetration testers could be brought in to break a high-profile application in order to make the point. Over time, the focus of penetration testing moves from, "I told you our stuff was broken" to a smoke test and sanity check done before shipping. External penetration testers bring a new set of eyes to the problem.
SE1.2 Ensure host and network security basics are in place: The organization provides a solid foundation for software by ensuring host and network security basics are in place. It is common for operations security teams to be responsible for duties such as patching operating systems and maintaining firewalls. Doing software security before network security is like putting on your pants before putting on your underwear.
CMVM1.2 Identify software defects found in operations monitoring and feed them back to development: Defects identified through operations monitoring are fed back to development and used to change developer behavior. The contents of production logs can be revealing (or can reveal the need for improved logging). In some cases, providing a way to enter incident triage data into an existing bug tracking system (many times making use of a special security flag) seems to work. The idea is to close the information loop and make sure security problems get fixed. In the best of cases, processes in the SSDL can be improved based on operational data.
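Closing that loop can be as simple as turning an incident-triage record into a bug-tracker entry carrying the special security flag. The sketch below is illustrative only; the ticket fields and incident structure are invented, and a real implementation would call an actual bug-tracking system's API.

```python
# Sketch of feeding an operations incident back to development as a
# flagged ticket. The list stands in for a real bug-tracking system.

def file_security_bug(incident, tracker):
    """Convert an operations incident into a flagged development ticket."""
    ticket = {
        "title": f"[SECURITY] {incident['summary']}",
        "component": incident.get("component", "unknown"),
        "labels": ["security", "found-in-operations"],  # the special flag
        "description": incident["log_excerpt"],
    }
    tracker.append(ticket)
    return ticket

tracker = []  # stand-in for a real bug-tracking system
file_security_bug(
    {"summary": "repeated auth bypass attempts", "component": "login",
     "log_excerpt": "401 burst from a single IP against /admin"},
    tracker,
)
```

The security flag matters: it lets the SSG query the tracker later and measure whether operational findings are actually driving changes in development.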
Don't forget that we have covered only twelve of the 111 activities described in the BSIMM! Also note that as "level one" activities, the twelve described here are particularly straightforward and simple. The BSIMM also includes "level two" activities (more difficult than level one and requiring more coordination) and "level three" activities (rocket science).
Two new (bonus) software security activities
As an observation-based descriptive model, the BSIMM changes over time. To give you a concrete idea of what this means and to give you a taste of some killer hard activities, we describe two brand-new, recently identified activities. These two are both "rocket science," level-three activities.
Our criteria for adding an activity to the BSIMM are as follows. If we observe a candidate activity not yet in the model, we determine, based on previously captured data and BSIMM mailing list queries, how many firms probably carry out that activity. If the answer is multiple firms, we take a closer look at the proposed activity and figure out how it fits with the existing model. If the answer is only one firm, the candidate activity is tabled as too specialized. Furthermore, if the candidate activity is covered by the existing activities, or simply refines or bifurcates an existing activity, it is dropped.
Using the criteria above, the two activities added to the BSIMM4 model are:
CR3.4 Automate malicious code detection: Automated code review is used to identify dangerous code written by malicious in-house developers or outsource providers. Examples of malicious code that could be targeted include: backdoors, logic bombs, time bombs, nefarious communication channels, obfuscated program logic and dynamic code injection. Although out-of-the-box automation might identify some generic malicious-looking constructs, custom rules for static analysis tools used to codify acceptable and unacceptable code patterns in the organization's codebase will quickly become a necessity. Manual code review for malicious code is a good start, but is insufficient to complete this activity.
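To make "custom rules" less abstract, here is a deliberately naive example of one: a line-level scan for time-bomb-style constructs, meaning logic guarded by a hard-coded future date. Real detection requires rules written for a proper static analysis engine; the regex here is an assumption and will both miss and over-match.

```python
# Toy custom rule sketching time-bomb detection: flag lines where a
# conditional compares a date against a hard-coded year. Illustrative
# only; a real rule belongs in a static analysis tool's rule language.

import re

TIME_BOMB = re.compile(r"if\s+.*date.*[<>]=?\s*.*20\d\d", re.IGNORECASE)

def scan_for_time_bombs(source: str):
    """Return 1-based line numbers that match the time-bomb pattern."""
    return [i + 1 for i, line in enumerate(source.splitlines())
            if TIME_BOMB.search(line)]

sample = (
    "from datetime import date\n"
    "if date.today() > date(2025, 1, 1):\n"
    "    disable_feature()\n"
)
print(scan_for_time_bombs(sample))
# prints: [2]
```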
CMVM3.3 Simulate software crisis: The SSG simulates high-impact software security crises to ensure software incident response capabilities minimize damage. Simulations could test for the ability to identify and mitigate specific threats or, in other cases, could begin with the assumption that a critical system or service is already compromised and evaluate the organization's ability to respond. When simulations model successful attacks, an important question to consider is the time period required to clean things up. Regardless, simulations must focus on security-relevant software failure and not natural disasters or other types of emergency response drills. If the data center is burning to the ground, the SSG won't be among the first responders.
Putting the BSIMM to work for your firm
We have only scratched the surface of the BSIMM in this article with a quick overview of fourteen of the 111 BSIMM activities. Download a copy today and see which activities make the most sense for your firm. Our bet is you can start with the twelve most common activities and move on from there.
The BSIMM project continues to grow. If you are interested in joining the project, please contact the authors.