Secure from the Start

SOFTWARE DEVELOPMENT

Get off on the right foot by working with your developers to ingrain security in the coding process.

This article can also be found in the Premium Editorial Download: Information Security magazine: Captive to SOX compliance? A compliance guide for managers.



Features & functions--not security--have been top-of-mind for most companies in application development. But the price for that oversight can be steep: exposed confidential data, stolen customer account information and multiple vulnerabilities.

Many of these problems can be avoided by planning for security at the starting gate--in software development. Architecture, design and coding offer opportunities to make applications and services more secure.

By taking security into account throughout the development cycle--defining security requirements, classifying data, coding securely and conducting thorough testing--you can start off on the right foot with secure applications.

So, on your mark, get set, code!

Requirements and Specifications
Software development life cycles (SDLCs)--the game plans of software development projects--come in many forms: Waterfall, Iterative Development, Prototyping and Spiral, just to name a few. Each has its own strengths, weaknesses and specific phases, but they have a step in common: defining software requirements--including security--so everyone understands what needs to be built.

Software requirement specifications communicate the software's required performance and security features to the entire team. They can range from short, ad-hoc instructions to very detailed, formal documents. The further you are from the developer--organizationally or geographically--the more detailed you need the specification to be. For example, clearly stating the level of authentication for each interface (Web, command level, API) will prevent developers from making their own assumptions about the level of trust required for the application environment. Clear statements of the functional and security requirements of a project are absolute necessities for outsourced projects.

Data Classification
After defining software requirements, the next step is classifying the data. While not normally set out as a separate SDLC phase, data classification is important in focusing security efforts, and can help you avoid costly errors early in the project. By reviewing the data that will be collected or handled by the application and assigning a classification based on its value to the organization, its sensitivity and any legal requirements, you can ensure security right up front. For instance, a company might not care who downloads its product catalog, but it probably doesn't want its preferred customer price list published.

In general, it's best to take the simple approach to data classification. As the number of classification categories increases, their management gets more difficult: You must constantly check--with each software release--whether information is in the correct category.

For example, HIPAA defines personal healthcare information as data that must be protected, so data must be correctly sorted into personal and non-personal healthcare categories to comply with the law.

In other circumstances, organizations might choose four categories: information classified "high"--which, if divulged, could cause embarrassingly bad publicity--is password-protected; "medium" applies to a preferred customer price list; "low" includes a standard price list and white papers; and "public" might be marketing material and the Web site itself.
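A scheme like the four categories above can be kept manageable by recording it in one place and defaulting unknown data to the most restrictive handling. The following sketch is illustrative only; the category names and controls are assumptions, not prescriptions:

```python
# Hypothetical policy table for the four classification levels described
# above. Controls per category are illustrative assumptions.
CLASSIFICATION_POLICY = {
    "high":   {"requires_auth": True,  "log_access": True},   # could cause bad publicity
    "medium": {"requires_auth": True,  "log_access": False},  # preferred customer price list
    "low":    {"requires_auth": False, "log_access": False},  # standard price list, white papers
    "public": {"requires_auth": False, "log_access": False},  # marketing material, the Web site
}

def controls_for(classification: str) -> dict:
    """Return handling rules for a label; unknown labels get the most
    restrictive treatment rather than slipping through unprotected."""
    return CLASSIFICATION_POLICY.get(classification, CLASSIFICATION_POLICY["high"])
```

Defaulting to "high" for unrecognized labels reflects the point above: with each release, data that hasn't been explicitly categorized should fail safe.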

Avoiding Coding Errors Out of the Block
There are many classes of common coding errors that can lead to security vulnerabilities. Here are three major problem areas and ways to avoid them.
  • Injection flaws allow an attacker to get access to other components--like the database server--through your application code. These flaws can include attacks that get your application to run code or scripts in order to access a backend database through an improperly protected interface.


  • A common example is a SQL injection flaw, which allows the attacker to input data that causes the application to execute SQL statements and provides privileged information to the attacker. These flaws commonly stem from poorly protected SQL statements and input that includes quote marks or escape characters, resulting in SQL statements the developer did not intend.

    TIP: To protect against these vulnerabilities, do not build SQL requests by simple string concatenation; instead, use argument substitution interfaces or stored procedures to access the database. You can also disallow, quote or escape special characters in user input.

    In the first week of 2006, almost 10 percent of the reported vulnerabilities in the National Vulnerability Database were SQL injection flaws.

  • Improper error handling can leave gaping holes in applications. Too often, developers do not properly manage error conditions.

    TIP: Don't reveal to the user any details of what has gone wrong. This information could be used by an advanced hacker to deduce details of the application and further the attack. For example, if the combination of user name and password is invalid, don't tell the user which one is wrong; tell them that the combination is invalid.

    Also, build the application code to log error conditions. Have the application log each user login attempt. Successful logins provide useful forensic evidence, and failed ones can alert you to an attack. Never log the clear text passwords.

  • Buffer overflows are possible in any programming language. In Java and C#, they can lead to a denial of service attack, and in C and C++, they can result in information loss or malicious code insertion.

    TIP: There are static and dynamic (runtime) analyzers that can help you find buffer overflow errors.

    Never assume you have the buffer space needed. And if it looks like the user is passing bad values, log this as an error. Avoid the C runtime functions that are well known to facilitate buffer overflow errors--strcpy, strcat, sprintf, vsprintf and gets.
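The SQL injection tip above--use argument substitution rather than string concatenation--can be sketched with Python's built-in sqlite3 module. The table, column and data here are illustrative only:

```python
# Minimal sketch of the argument-substitution tip, using Python's
# standard-library sqlite3 module. Schema and data are illustrative.
import sqlite3

def find_user(conn, username):
    # UNSAFE (shown only as a comment): concatenation lets input such as
    # "x' OR '1'='1" rewrite the query:
    #   conn.execute("SELECT id FROM users WHERE name = '" + username + "'")
    # SAFE: the ? placeholder makes the driver treat input strictly as data.
    cur = conn.execute("SELECT id FROM users WHERE name = ?", (username,))
    return cur.fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")

print(find_user(conn, "alice"))          # [(1,)]
print(find_user(conn, "x' OR '1'='1"))   # [] -- the injection attempt matches nothing
```

Because the quote marks arrive as data rather than SQL syntax, the attack string simply fails to match any row instead of altering the statement.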

--Michael Jordan

Architecture and Design
Code architecture and design can affect application security in much the same way a building's architecture affects its physical security: Sliding windows within easy reach pose physical security threats, just as coding holes can open the door to attacks.

Any application architecture decision must consider the information handled by the application and how the integrity or safety of that information might be threatened by architectural choices. Choosing to store customer information in a database secured only for public information is an example of a poor decision. On the other hand, a decision to use a proven third-party identity management service instead of building new authentication stores into the application can improve application security because such systems can be tricky to develop.

Likewise, the design of a software product should focus on components that most likely affect security, like input validation, authentication, authorization, error handling and logging. If third-party commercial or open-source software products are used as components, they need the same scrutiny as the application code to ensure the services they provide do not weaken your application's security.

It's important to effectively communicate software design decisions and requirements to your team to ensure everyone is on the same page and that security is properly implemented. Effective communication and team building can go a long way toward avoiding mistakes in which a developer or tester makes a bad decision based on incorrect assumptions--and leaves code open to attack.

Coding Securely
The development and coding stage, where the requirements, architecture and design come together, is critical. Developers need to understand application security threats, and must be aware of common coding errors, such as buffer overflows, injection flaws and invalidated input, that can lead to security vulnerabilities. There is no substitute for security experience in the development process. That goes for everyone involved--the architects, designers, testers and managers--not just the developers.

If a team doesn't have experience with security, develop it. Have developers read books such as Writing Secure Code by Michael Howard and David LeBlanc, and The Open Web Application Security Project (OWASP) Guide to Building Secure Web Applications, or have them earn development-oriented security certifications such as Microsoft's MCSE: Security and Sun Microsystems' SCD/WS.

On-the-job training can also help. When there's a security incident with an application, get your developers involved in incident response activities. Hold open post-mortem meetings after these incidents are resolved to share details with the staff, and have staff participate in internal and external security code reviews.

Passing the Baton: SOA Hurdles
By Jonathan Gossels
Service Oriented Architecture (SOA) promises reduced development cost and faster time to market, primarily through code reuse. However, securing an SOA environment can be challenging.

Generally, SOA means an infrastructure characterized by the following:
  • Service virtualization--a reusable set of code with well-defined interfaces that performs a well-recognized business function.
  • Service reuse--where applications draw the bulk of their functionality from a catalog of preexisting services.
  • Service brokering--services register their interfaces with a broker so that they are easily accessible by other applications.
The SOA approach of producing general-purpose services can conflict with application-specific security requirements. In a traditional application environment, sensitive data is protected across all networks and systems it traverses during processing. There is mutual authentication of principals and enforcement of authorization levels, and audit trails and logging are part of the infrastructure. In an SOA environment, implementing those same controls is difficult.

Another challenge is the way SOAs typically rely on a brokering mechanism that enables services to publicize their service contracts and other descriptive information in a catalog or shared repository. If you are going to run sensitive applications in an SOA, there must be a formal process for reviewing the security of new services and a structured change control process for adding services.

Authentication can also be problematic because, in many default implementations of SOA, no authentication is performed. Even if a developer enables Web services security, he still must determine what authentication means in the loosely coupled SOA environment.

Another sticking point is the lack of end-to-end security. In larger SOAs, software infrastructure is used to create a bus processing model that aids in dynamically connecting, mediating and controlling services and their interactions. The beauty--and danger--of this model is that each component in the chain is unaware of the processing that occurs in the other components.

Jonathan Gossels is president of SystemExperts.

Also, it's important to be aware that some languages are more susceptible to certain errors. C is prone to buffer overflow errors, improperly terminated strings and memory leaks. Perl can succumb to installation and scoping problems. And PHP--a scripting language used to make dynamic Web pages--is prone to unsafe configuration; many third-party reusable PHP components contain vulnerabilities, making PHP one of the most commonly cited languages in the National Vulnerability Database.

Java also is susceptible to buffer overflows, but the ramifications aren't as severe. If a C programmer makes a buffer overflow error, an attacker can exploit it to insert code to the application. If a Java developer makes a similar mistake, an attacker might be able to crash the application but can't inject code into it.
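The contrast above can be sketched in any memory-safe language; here is a minimal Python illustration (the same principle applies in Java or C#): an out-of-bounds write raises a runtime error--at worst a crash or denial of service--instead of silently corrupting memory.

```python
# Sketch: in a memory-safe runtime, writing past the end of a fixed-size
# buffer is stopped by the runtime rather than overwriting adjacent memory.
buf = bytearray(8)           # fixed-size, 8-byte buffer
overflow_caught = False
try:
    buf[8] = 0x41            # attempt to write one byte past the end
except IndexError:
    overflow_caught = True   # the runtime blocked the out-of-bounds write
print(overflow_caught)       # True -- and the buffer itself is untouched
```

In C, the equivalent `buf[8] = 0x41;` on a `char buf[8]` would compile and run, quietly clobbering whatever sits next to the buffer--the raw material of a code-injection exploit.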

There are tools that can help with secure coding, depending on the language. For C, safe string libraries can help isolate buffer overflows. For Java, tag libraries such as the Struts JSP tag library can help, as can an IDE like Eclipse paired with a source code analyzer such as FindBugs.

In addition, applications can be developed to be self-testing or self-correcting. Apache, for instance, will not use security certificates that are not properly protected in the file system.
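A self-check like the Apache behavior described above can be sketched as follows. This is a hedged illustration, assuming POSIX-style permission bits; the file paths are throwaway examples, not a real key store:

```python
# Illustrative self-check: refuse to trust a private key file whose
# permissions allow any group or other access. Assumes POSIX permissions.
import os
import stat
import tempfile

def key_file_is_protected(path: str) -> bool:
    """Return True only if no group/other permission bits are set."""
    mode = os.stat(path).st_mode
    return (mode & (stat.S_IRWXG | stat.S_IRWXO)) == 0

# Example: create a throwaway "key" file and check it before trusting it.
fd, key_path = tempfile.mkstemp()
os.close(fd)
os.chmod(key_path, 0o600)                # owner read/write only
print(key_file_is_protected(key_path))   # True -- safe to use
os.chmod(key_path, 0o644)                # world-readable -- reject it
print(key_file_is_protected(key_path))   # False -- refuse to start
os.unlink(key_path)
```

An application that performs this check at startup fails loudly in a misconfigured environment instead of silently serving with an exposed credential.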

Code Reviews
The simplest--and sometimes most easily missed--coding mistakes are often caught while reviewing the code. (It's important to make this a separate and distinct activity from a developer's personal review of his or her own code before checking it into the source control system.)

Security code reviews should examine a variety of security issues, and can range from simple to formal. You could set up a "code buddies" system where developers read each other's code; or a "code reading" system in which your developers present their code to a larger part of the development team--which gets the original developer thinking about the rationale and justification for each block of code.

Also, by conducting code reviews both internally and externally (through the use of reviewers from other corporate projects or departments, or a third-party reviewer), you have a higher chance of catching errors up front.

Regardless of your chosen review method, there are certain steps you must follow to ensure secure code:

  • Appoint a secretary to note comments and defects discussed during the review.


  • Review the common errors that have been seen in similar code.


  • Have the code buddy or developer present the code.


  • For each basic block, function or method, consider common errors: Is all input validated? Are all error cases handled? Is proper access control in place? Are buffers (and strings in C) protected from overflow? Is sensitive data stored securely? Are temporary variables containing secrets like passwords cleared after use?


  • Enter all defects in the bug database.


  • Update the common error list for the next code review.
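Several of the checklist questions above--Is all input validated? Are all error cases handled? Are secrets kept out of logs?--can be seen together in one small handler. This sketch is hypothetical; the names, the toy credential store and the length limit are illustrative assumptions:

```python
# Hypothetical login handler exercising the review checklist: input is
# validated, every error path is handled, the user sees only a generic
# message, and attempts are logged -- never the clear-text password.
import hashlib
import hmac
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("auth")

USERS = {"alice": hashlib.sha256(b"s3cret").hexdigest()}  # toy credential store

def login(username: str, password: str) -> tuple:
    # Validate input before using it.
    if not username or not password or len(username) > 64:
        log.warning("rejected malformed login input")
        return (False, "Invalid username or password.")   # generic message
    stored = USERS.get(username)
    supplied = hashlib.sha256(password.encode()).hexdigest()
    # Same message whether the name or the password is wrong, per the tip above.
    if stored is not None and hmac.compare_digest(stored, supplied):
        log.info("successful login for %s", username)
        return (True, "Welcome.")
    log.warning("failed login attempt for %s", username)  # password NOT logged
    return (False, "Invalid username or password.")
```

A reviewer walking the checklist against this function can answer each question by pointing at a specific line, which is exactly what makes the review concrete rather than a box-ticking exercise.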

Integration and Testing
Security testing, along with functionality, reliability and performance testing, can tell you how close you are to being ready to move your application to production. Starting this testing early and overlapping it with development as much as possible will get you early results, much like exit polls on Election Day. Evaluate defects and test results on a regular basis, then feed what you learn back into the development and the code review process.

Penetration testing and application vulnerability testing are critical in this phase. As you prepare the application for production, scan the network and systems that will host the application with all application code in place, configured as close to production as possible. Ensure there are no extraneous open ports or services being offered before you consider the testing complete. Have testers try to break the application by passing bad or deliberately malicious data, then fix any vulnerabilities before the rollout.

Before moving the application code into the production environment, conduct port scans and repeat the penetration testing to ensure the environment is correctly configured before production goes live.
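The "no extraneous open ports" check above can be sketched with a simple TCP connect probe. This is a minimal illustration only--real pre-production scans should use a dedicated tool such as nmap--and the host and allowed-port set are hypothetical:

```python
# Minimal sketch of a pre-production port check: flag anything listening
# outside an approved list. A TCP connect probe only; illustrative, not a
# substitute for a real scanner.
import socket

def open_ports(host, ports, timeout=0.2):
    """Return the subset of `ports` accepting TCP connections on `host`."""
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                found.append(port)
    return found

def unexpected_ports(host, ports, allowed):
    """Ports that are open but not on the approved list."""
    return [p for p in open_ports(host, ports) if p not in allowed]

# Example (hypothetical host and allow-list):
# print(unexpected_ports("127.0.0.1", range(8000, 8011), allowed={8080}))
```

An empty result from `unexpected_ports` is one concrete, repeatable exit criterion for the "testing complete" decision above.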

Production and Maintenance
In the maintenance stage, poor code hygiene is magnified, and defects can be easily introduced. Oftentimes managers put entry-level developers on maintenance, which can create problems. If you choose to do this, make sure a more experienced developer mentors the novices and reviews their code.

Coding standards--including naming, language construct use and comment standards--can help prevent misunderstandings and the introduction of defects. A developer trying to add features or fix a defect can be sorely misled by poorly constructed comments in the source code.

Also, application developers can help avoid maintenance problems by thinking ahead of the game: Would another developer two years from now be able to understand the reasoning behind the coding decisions? Do the variable and method/function names make sense? Are there comments to explain trickier parts of the code--better yet, all of the code?

One strategy is to make developers responsible for their own maintenance for a certain period of time. If they write code, they have to fix defects in it for the next two years; if they don't produce secure code, they risk that 2 a.m. phone call when there's an incident. Oftentimes, that alone can be an effective incentive for your programmers to code securely.

Taking steps to ensure security during the maintenance of an application and the entire software development life cycle requires time and effort. But it's well worth the early investment. Once software is developed, it's much harder and more expensive to go back and add security. By then, you may find yourself in the middle of a breach that lands your company in hot water.

Planning for security at the starting gate will pay off with fewer security headaches down the road.

This was first published in March 2006
