Feature

9 Ways to Improve Application Security After an Incident

When an enterprise suffers an application security incident, a whirlwind of activity takes place to triage the immediate problem. Application and security teams work side by side to identify the damage, implement a quick fix to prevent further losses, and perform a root-cause analysis to determine why the vulnerability existed in the first place.

Embarrassingly often, incident response is the first time application security is discussed in earnest between information security and application teams. While policies may be in place and risk assessments may have been performed, an attacker has exposed an indisputable vulnerability that slipped through the cracks. Even penetration testing results, with their inherent proof of exploitation, do not bring the same gravitas. Experienced testers have frequently seen findings explained away by a defensive application owner, remediation scheduled into a next release that never comes, or slapdash fixes that block the proof-of-concept but leave the underlying vulnerability intact.

When the root cause analysis is laid at the feet of management, the savvy information security officer will not only have an explanation for "what happened this time," but also a way to identify other cases of the same vulnerability in other applications across the enterprise. Even savvier information security teams can leverage the incident as a catalyst to enhance the assessment of applications and improve an inconsistent and underdeveloped application security program.

However, more often than not, these fledgling improvements get crushed under the inertia of the organization. It can be difficult to shift people's attention from the quick fix to fixing the root cause once the initial damage has been mitigated. The complexities of implementing an application security program can frustrate even experienced practitioners, and the difficulty of establishing a business case can stall the effort, given the large costs that many of these initiatives carry.

When attempting to improve application security after an incident, consider the following nine pieces of advice to forestall some of the challenges other organizations have faced.

1. IDENTIFY HIGH-RISK APPLICATIONS
(But don't spend too much time on the initial inventory)

Most organizations already have a fairly good understanding of the applications that mean the most to the business. They're the applications that, when performance or availability problems occur, everyone drops everything to fix. They're also the ones that require compliance and regulatory controls. However, many organizations also have third-party applications that provide a critical function to the business but can be overlooked if the exercise is too introspective.

One inevitable thing that happens during an inventory exercise is that a rush of data arrives with varying levels of confidence and quality. Focus can shift away from the end result and toward the inventory practice itself, with elaborate schemes for identifying every last application. Limit the effort spent cataloging applications and focus instead on determining which applications are in scope.

Determine what data or functionality in each application is worth protecting, and do some quick threat modeling to see if there is a clear path to an adverse event. In this threat model, assume that access control fails and the underlying application data and functionality are exposed to all. Critical Web application vulnerabilities often result in exactly these outcomes, so those worst-case scenarios should be on the table.

Try to explain the risk in two sentences as if you were speaking to a layperson. A good example could be: "The PayThemNow application is used to transfer funds between the corporation and its payees. If an unauthorized transaction is entered into the system, it may be difficult or impossible to recover the funds." Another example: "The RebatePlease application collects significant amounts of customer information and makes it available to internal business units and an external check processing provider. If the data is compromised, a breach disclosure may be necessary, and if the application logic can be subverted, customers can get rebates they are not entitled to receive."

On the other hand, when you end up stringing together a good number of "ifs," consider moving the application down your priority list. Applications where only reputational damage is on the line, or where internal network access is required, are two common areas where organizations spend too much time attempting to create a complete inventory.

Keeping the list small and doing a strong job on a subset of your application estate will yield positive benefits for application security in the longer term, and prevent stall-out and exhaustion.
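
To keep the exercise lightweight, it can help to capture each candidate application as a simple record rather than an elaborate catalog. Below is a minimal sketch in Python; the record fields and the "count the ifs" scoping rule are illustrative assumptions, not a prescribed schema.

from dataclasses import dataclass, field

@dataclass
class AppInventoryEntry:
    name: str              # e.g., "PayThemNow"
    data_at_stake: str     # what is worth protecting
    adverse_event: str     # worst case if access control fails
    risk_statement: str    # the two-sentence layperson summary
    assumptions: list = field(default_factory=list)  # the chained "ifs"

    def in_scope(self, max_assumptions: int = 2) -> bool:
        # Too many chained "ifs" pushes the application down the priority list.
        return len(self.assumptions) <= max_assumptions

pay_them_now = AppInventoryEntry(
    name="PayThemNow",
    data_at_stake="outbound payment functionality",
    adverse_event="unauthorized funds transfer that may be impossible to recover",
    risk_statement="The PayThemNow application is used to transfer funds between "
                   "the corporation and its payees. If an unauthorized transaction "
                   "is entered into the system, it may be difficult or impossible "
                   "to recover the funds.",
)
print(pay_them_now.in_scope())  # True: clear path to an adverse event, keep it in scope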

Application Assessment Metrics
There are two major classes of metrics used to measure security initiatives:

1. Key Risk Indicators (KRIs), which measure the risks identified by the assessment program.

Examples of KRIs for this program would include: number of vulnerabilities still open for each application, applications with open vulnerabilities that have suffered a successful attack within the last year, and applications with open vulnerabilities where there is no clear path toward remediation or where the risk has been accepted by the business unit.

2. Key Performance Indicators (KPIs), which measure the quality and coverage of the program's execution.

Examples of KPIs for this program would include: number of high-risk applications, number of assessments performed, code/component coverage for each assessment, assessment coverage per business unit, number of vulnerabilities opened for each application, number of vulnerabilities addressed with a plan, and number of vulnerabilities closed or remediated.
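
As a rough illustration of how a few of these indicators could be computed from assessment output, here is a minimal Python sketch; the finding records and field names are hypothetical.

findings = [
    {"app": "PayThemNow",   "status": "open",   "remediation_plan": True},
    {"app": "PayThemNow",   "status": "closed", "remediation_plan": True},
    {"app": "RebatePlease", "status": "open",   "remediation_plan": False},
]

# KRI: open vulnerabilities per application
open_per_app = {}
for f in findings:
    if f["status"] == "open":
        open_per_app[f["app"]] = open_per_app.get(f["app"], 0) + 1

# KRI: applications with open vulnerabilities and no clear path to remediation
no_plan = {f["app"] for f in findings
           if f["status"] == "open" and not f["remediation_plan"]}

# KPI: vulnerabilities closed or remediated
closed = sum(1 for f in findings if f["status"] == "closed")

print(open_per_app, no_plan, closed)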

--CORY SCOTT

2. DEVELOP A STRUCTURED, MEASURABLE AND REUSABLE APPROACH
(It's going to take more than a spreadsheet.)
The meaning, scope, and depth of an application security assessment, penetration test, or vulnerability scan can vary significantly depending on the audience. The fact that we have at least three different names for the process, each with its own approach, should be incontrovertible evidence of this issue.

Even within consulting firms with an established methodology and deliverables, there can be a surprising amount of variation in testing depth for a given function or sub-application. Tools are not a panacea for this problem either, as the underlying logic used to explore a given site and identify potentially vulnerable interfaces requires human guidance. An automated tool cannot determine whether coverage is acceptable or whether the necessary tests were run.

By defining a shortlist of critical vulnerabilities to test, and a flexible approach on how to perform the assessment (tools, code review, or penetration testing), a skilled application security analyst can provide the appropriate level of coverage and demonstrate that the application has been tested sufficiently. While you may allow flexibility on which approach should be used, there should be a standard and reusable testing methodology for each approach.

Each test should produce a standardized deliverable, including a common approach to rating vulnerabilities and documentation of each issue and its impact on the application. It should include steps to reproduce and suggested remediation advice (perhaps supplied by central guidance) for development staff. The findings should also be in a portable format for reporting purposes.
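
One way to keep the deliverable standardized and portable is to capture each finding as structured data. The following is a minimal sketch, assuming a hypothetical rating scale and field names rather than any particular reporting tool.

import json
from dataclasses import dataclass, asdict

@dataclass
class Finding:
    title: str
    rating: str               # common scale, e.g. "critical" / "high" / "medium" / "low"
    impact: str               # impact on the application
    steps_to_reproduce: list
    remediation_advice: str   # possibly drawn from central guidance

finding = Finding(
    title="SQL injection in rebate lookup",
    rating="high",
    impact="Full read access to customer records",
    steps_to_reproduce=["Submit ' OR 1=1-- in the rebate ID field"],
    remediation_advice="Use parameterized queries for all rebate lookups",
)

# Portable format (JSON) so findings can be aggregated for reporting
print(json.dumps(asdict(finding), indent=2))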

Measuring the progress and risks identified by the assessment program is critical to maintain momentum and continue the dialogue with all stakeholders (see "Application Assessment Metrics," above).

3. CONSIDER TWO-PHASE ASSESSMENTS TO FIND DESIGN FLAWS
(Don't just scan for yesterday's exploit.)
In application security, there are two primary types of vulnerabilities. Design vulnerabilities require changes to the underlying application design or architecture, while implementation vulnerabilities are typically fixed with additional code (such as an input validation library) or modification of code in a particular function of an application. Traditional application penetration testing is focused on implementation vulnerabilities, although a skillful tester can also identify the symptoms of some design flaws.

An application security design assessment is often used for high-value applications where there are aspects of the application that could result in security issues but do not lend themselves easily to implementation testing. Use of cryptography, logging, development practices, and other design review criteria are common examples of these aspects. The test is typically performed by an application security specialist in an interview format, and includes data flow diagramming around trust boundaries, threat modeling, review of development practices, limited source code review, and design documentation review.

While a design assessment may appear to be merely a paper-based exercise, a well-executed review can find systemic flaws that result in hundreds of implementation vulnerabilities which would have taken multiple man-months to identify in a traditional penetration test engagement.

As a result, it is recommended that some applications undergo a two-phase assessment that includes a design review in addition to a penetration test.

4. DEPLOY APPLICATION SECURITY SPECIALISTS WITH BUSINESS-SPECIFIC REMIT
(Context and availability are crucial.)
Successful implementation of application security discipline requires significant alignment with application development practices of the organization and the application owners. If the enterprise has multiple development practices spread across multiple business units, it is important to deploy staff with close proximity to those teams. The application security specialist should represent the interests of the application security practice while becoming familiar with the underlying business that the application team supports.

A business-aligned application security specialist would:

  • Maintain a critical watchlist of applications that require oversight and guidance.
  • Ensure that application development teams are aware of security guidelines and requirements.
  • Advocate security improvements in the application lifecycle.
  • Scope and schedule assessment activity.
  • Verify that third parties used by the business follow application security guidelines.
  • Assist in incident response.
  • Perform application security assessments and coordinate with other internal and external testing resources.

Where an organization cannot support a dedicated application security specialist, cross-training information security and application development staff on the ground is an acceptable alternative. A developer with interest in security combined with a general information security specialist with some technical skill can make a strong combination when it comes to assessment and remediation.

At the enterprise level, there should be an application security head that will oversee the application security specialists, encourage reuse and economies of scale and scope of the application security teams, produce metrics and measure the success of the application security program, set standards and guidelines for application security assessment and development methodologies, and work with the larger information security community internal and external to the enterprise.

5. SCOPING IS CRITICAL, SO GET EVERYONE TO THE TABLE
(Get this wrong, and you've just wasted thousands of dollars.)
One of the most difficult phases of application security assessment is scoping. When you get this phase wrong, the results are at best meaningless and at worst provide a false sense of security. In order to get accurate and risk-aligned scoping for a given test, you need the application owner, a development representative, an information security representative, and an application security specialist to work together to define the test parameters.

  • The application owner will be responsible for clearly stating the purpose of the application and how it is used. The owner is also responsible for engaging deployment and operations staff to assist the testing team.
  • The development representative can answer detailed questions about how the application is architected and what the deployment environment looks like.
  • The information security representative should give the testing team direction on what risks are relevant to the given application and what policies and guidelines the application is bound to.
  • The application security specialist should delve into the nature of the application with the source material from the other three representatives and design a test plan that meets the assurance requirements.

Questions about the use of quality assurance or pre-production environments, number of test accounts, use cases, testing restrictions, access to source code, and other pertinent issues can be addressed at this point. At the end of the scoping phase, a document should be produced by the application security specialist that outlines the scope of the assessment, including what application functionality will be tested, the test approach, requirements to start the test, and what the deliverable will look like. If the application security specialist is not performing the test, he or she will confirm the scope with the testing team.
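
As an illustration, the key contents of that scoping document could be captured in a form like the following Python sketch; the fields and example values are hypothetical.

scope_document = {
    "application": "RebatePlease",
    "functionality_in_scope": ["rebate submission", "payee data export"],
    "test_approach": "authenticated penetration test plus targeted code review",
    "environment": "pre-production",
    "test_accounts": 2,
    "testing_restrictions": ["no load testing", "no destructive data changes"],
    "requirements_to_start": ["test accounts provisioned", "source code access granted"],
    "deliverable": "standard findings report with ratings and remediation advice",
}

for key, value in scope_document.items():
    print(f"{key}: {value}")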

6. ENGAGE INTERNAL AND EXTERNAL APPLICATION SECURITY ASSESSMENT TEAMS
(But ensure that the results fit the methodology and can be measured.)
After the approach has been defined, the first few engagements will set the precedent for the rest of the assessment exercise. It is important to introduce a quality assurance checkpoint as completed assessments start to come in, to make sure the approach is being followed and to identify where changes may need to be made in either the methodology or the assessment team.

As a result, it is recommended that the assessment program not engage in too many concurrent assessments at the beginning. Instead, attempt to collect assessments from a select group of well-established providers across different types of applications and deployment environments to determine how effective the methodology and metrics truly are. If you can find "security friendly" application teams in your organization, you might want to start with them first.

7. OBTAIN FUNDING
(Not just for assessment, but also for remediation, in the same bite.)
Security programs that only attempt to see how deep or wide the problem is without also attempting to correct the issue often find it difficult to gain acceptance among stakeholders outside of the security program. There is also an advantage of "striking while the iron is hot." If an incident has raised awareness about application security, interest may cool while you are waiting for assessment results.

A good rule of thumb is that remediation work will cost at least as much as the assessment work. By obtaining remediation funding up front, it may also be used as an incentive for application owners who fear yet another assessment without any means or budget to remediate issues that are found. Depending on the level of acceptance of uncertainty in the budgeting process, it may be advantageous to set a "not-to-exceed" remediation cost for each application. Where application fixes would exceed that amount, a separate funding case could be put forth.

Remediation funding can include short-term as well as longer-term solutions. Examples of short-term solutions include temporary deployments of Web application firewalls or other filters and development and documentation for input validation libraries. Longer-term solutions may include upgrades for legacy application frameworks and additional consulting or development resources to remediate issues.
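
For the input validation library mentioned above, a short-term fix could be as simple as a centrally documented allow-list helper. This is a minimal sketch; the field names and patterns are illustrative assumptions, not a complete validation library.

import re

ALLOWED_PATTERNS = {
    "rebate_id": re.compile(r"^[0-9]{1,10}$"),
    "payee_name": re.compile(r"^[A-Za-z .'-]{1,100}$"),
}

def validate(field: str, value: str) -> str:
    """Return the value if it matches the allow-list pattern, else raise."""
    pattern = ALLOWED_PATTERNS.get(field)
    if pattern is None or not pattern.fullmatch(value):
        raise ValueError(f"Rejected input for field '{field}'")
    return value

validate("rebate_id", "12345")          # accepted
# validate("rebate_id", "1 OR 1=1")     # raises ValueError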

8. REVIEW DEVELOPMENT LIFECYCLE AND VENDOR MANAGEMENT PROCESSES
(This brings lasting change.)
No one starts out building an application with the thought that it should lack sufficient security. Instead, the application builder uses the toolset and patterns that they are familiar with and are instructed to use by policy and process.

The problem is often that existing processes and controls do not sufficiently take security requirements into account, or are not flexible enough to accommodate application models with varying degrees of complexity or risk. By reviewing development practices and testing requirements, an application security specialist can look for opportunities to include security requirements during the design phase and testing practices during the acceptance and implementation phases.

Two application security maturity models, OpenSAMM (http://www.opensamm.org/) and BSIMM (http://www.bsi-mm.com/), have recently been released to help you assess your existing development lifecycle and build a roadmap of where you need to go. OpenSAMM is particularly useful in determining the levels of effort required for program improvements, and BSIMM is built from real application security program experiences, which can be helpful in demonstrating that other companies are doing the same type of thing.

Also, if you looked at Microsoft's Security Development Lifecycle (SDL) in the past and found it too detailed or too focused on product development, look again at the SDL Optimization Model (http://msdn.microsoft.com/en-us/security/dd221356.aspx), a streamlined version of the first revision with a useful self-assessment guide. Both SDL models are aimed at development organizations, and you may find the discussion of risk management and assurance lacking.

In the case of third-party applications, alignment with vendor management and procurement practices can yield positive results. By performing a risk assessment prior to contract approval (or even during vendor selection), the application security team can provide the necessary security criteria and assurance requirements.

9. DON'T FORGET ABOUT DETECTION AND RESPONSE CAPABILITY
(Breaches will happen again, and the cost will be higher if you're unprepared.)
Unfortunately, despite the best efforts made by application and security teams alike, breaches will continue to occur. However, the impact of any given breach can be reduced if there is an adequate audit trail of application activity and a skilled responder that can assist the application team in forensics and root cause analysis.

Some institutions have implemented Web application firewalls in monitoring mode only or leveraged other types of monitoring technologies to keep track of external untrusted access to Web applications over the Internet. However, it is always best to have the application generate meaningful log entries that can be used to re-create an attacker's interaction with the application. There are frameworks available that outline security logging requirements that should be evaluated during application design.
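
As an illustration of application-generated security logging, the sketch below emits structured audit entries that could help re-create an attacker's interaction after the fact; the event fields are assumptions for this example, not a formal logging standard.

import json
import logging
from datetime import datetime, timezone

security_log = logging.getLogger("appsec.audit")
logging.basicConfig(level=logging.INFO)

def log_security_event(event_type: str, user: str, source_ip: str, detail: str) -> None:
    # Emit one structured entry per security-relevant event.
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event": event_type,   # e.g. login_failure, privilege_change, validation_reject
        "user": user,
        "source_ip": source_ip,
        "detail": detail,
    }
    security_log.info(json.dumps(entry))

log_security_event("validation_reject", "anonymous", "203.0.113.7",
                   "rebate_id failed allow-list check")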

The application security specialist can also assist during an incident by determining the original application flaw used in the breach, recommending immediate fixes to prevent further exploitation, and reviewing log and audit trail activity.

Application security is a tough problem to tackle during times of relative calm, but when an incident takes place, both opportunities and challenges arise. Establishing an assessment program, putting together appropriate metrics, addressing development lifecycle issues, and putting specialists in place can have a lasting impact on the enterprise and help reduce the frequency and cost of breaches in the future.

Cory Scott is a director at Matasano Security, an independent security research and development firm that works with vendors and enterprises to pinpoint and eradicate security flaws, using penetration testing, reverse engineering, and source code review. Prior to joining Matasano, he was the Vice President of Technical Security Assessment at ABN AMRO / Royal Bank of Scotland. He also has held technical management positions at @stake and Symantec. He has presented at Black Hat Briefings, USENIX, and SANS, and leads the local Chicago OWASP chapter.

Send comments on this article to feedback@infosecuritymag.com.

This was first published in October 2009
