Web scanning and reporting best practices

Implementing a solid Web scanning routine is a key way to defend corporate Web applications against attack. And with industry requirements such as PCI DSS, performing vulnerability scans is also necessary to stay compliant. In this tip, contributor Joel Dubin discusses what goes into a Web scan, what should be scanned and how the results can be interpreted.

As Web application attacks become more common, the need for tools to check Web sites for vulnerabilities grows.

The days of picking at Web sites by hand and checking for common hacks are gone. Instead, automated testing tools generate reports for management to review and for developers to use as guidelines when fixing security bugs.

Web scanning has become part of the testing routine used for catching other bugs in the software development life cycle. And since Web security has become a part of industry requirements like the Payment Card Industry Data Security Standard (PCI DSS), scanning for vulnerabilities is no longer a luxury; it's now a compliance mandate.

This tip discusses some best practices for using scanning tools and reporting their results, including what goes into a scan, what should be scanned and how to interpret the results.

There are three elements in a successful scanning program: defining the scope and purpose of the scan, picking the right scanning tool for the job and assembling a usable and readable report. The last thing you want, even if the site is full of holes, is an incomprehensible hundred-page document nobody can read or understand. It's important that the scan report can be translated into corrective action developers can take before the site goes into production and is exposed to nastiness in the wild.

Reasons for website scanning
First, define the scope and purpose of the scan. Is it for compliance with government regulations or industry guidelines like PCI, or is it to identify the causes of specific problems? Is it in response to an incident or attack, or something a corporation wants to do routinely as part of its software development life cycle to harden sites before they're live?

If the scan is for compliance, it can focus on just regulatory requirements. Section 6.5 of PCI, for example, requires testing for the top ten vulnerabilities listed by the Open Web Application Security Project (OWASP). This is an excellent starting point and covers the vast majority of Web hacking attacks.
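To make the compliance scope concrete, it can help to track each OWASP Top Ten category as an explicit checklist item. The sketch below is only an illustration of that idea, assuming the 2007 Top Ten list that PCI DSS referenced at the time; the category names are paraphrased and the pass/fail structure is not an official PCI or OWASP format.

```python
# Illustrative checklist only: OWASP Top Ten (2007) categories mapped to
# test results. Names are paraphrased; the structure is an assumption,
# not an official PCI DSS or OWASP artifact.
OWASP_TOP_TEN_2007 = [
    "Cross-site scripting (XSS)",
    "Injection flaws (e.g., SQL injection)",
    "Malicious file execution",
    "Insecure direct object references",
    "Cross-site request forgery (CSRF)",
    "Information leakage and improper error handling",
    "Broken authentication and session management",
    "Insecure cryptographic storage",
    "Insecure communications",
    "Failure to restrict URL access",
]

def compliance_gaps(results):
    """Return the categories that have not yet been tested or did not pass."""
    return [cat for cat in OWASP_TOP_TEN_2007 if results.get(cat) != "pass"]

# Example: two categories tested so far, the rest still outstanding.
results = {"Cross-site scripting (XSS)": "pass",
           "Injection flaws (e.g., SQL injection)": "fail"}
for gap in compliance_gaps(results):
    print("Outstanding:", gap)
```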

But if tests are a routine part of a company's software development life cycle, it's a good idea to run a broader scan. Ideally, corporate scanning should be built around the company's IT security policy. Some policies mandate controls not covered by OWASP, such as two-factor authentication for high-risk transactional sites or specific password requirements, and those should be tested as well.

Remember, compliance pleases auditors and regulators, but there's more to security than checking off lists.

Choosing a website scanning tool
Next, choose the right tool for the job. The ideal choice would be one that's easy to use and set up, meshes with the network without bringing it down (poorly configured scanning tools have a way of doing that) and generates customizable reports. Scanners should also be able to simulate real-life attack scenarios, not just hacks that developers might dream up by themselves.

After all, contemporary Web hacking has become sophisticated, moving from brute force attacks against logon screens to cross-site scripting (XSS) and now to hacks against Ajax and Web 2.0 technologies.

Some of the better tools on the market are AppScan from Watchfire Corp., WebInspect 7.0 from Hewlett-Packard Co.'s SPI Dynamics group, Web Vulnerability Scanner Enterprise from Acunetix Ltd. and Hailstorm Enterprise Application Risk Controller (ARC) from Cenzic Inc. Each tool has different strengths and weaknesses. Some are better at spotting vulnerabilities in JavaScript, others at injection attacks such as SQL injection and XSS, and others at Ajax exploits.

Because most websites using Ajax and Web 2.0 technologies are written in more than one programming language, it might be worthwhile, if the budget allows, to use two scanning tools and compare the results. Besides commercial scanners, there are free tools like Nikto, N-Stalker Web Application Scanner and Burp Suite. These free tools are best used alongside commercial products to augment their coverage rather than for standalone testing, since they don't offer all the features of their commercial counterparts.
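To show how a free tool can feed the same workflow, here is a minimal sketch that drives a Nikto scan from Python and collects its CSV output for later comparison with a commercial scanner's findings. It assumes Nikto is installed on the tester's machine, that the host name is a hypothetical staging server you are authorized to scan, and that the CSV column layout follows Nikto's usual host, IP, port, ID, method, URI, description ordering.

```python
# Sketch only: wrap a Nikto scan so its results can be compared with a
# commercial scanner's report. Assumes Nikto is installed and that
# staging.example.com is a test host you are authorized to scan.
import csv
import subprocess
from pathlib import Path

def run_nikto(target, out_file="nikto_results.csv"):
    """Run Nikto against the target and return its findings as dictionaries."""
    subprocess.run(
        ["nikto", "-h", target, "-output", out_file, "-Format", "csv"],
        check=True,
    )
    findings = []
    with Path(out_file).open(newline="") as fh:
        for row in csv.reader(fh):
            # Assumed column order: host, ip, port, id, method, uri, description
            if len(row) >= 7:
                findings.append({"uri": row[5], "description": row[6]})
    return findings

for finding in run_nikto("staging.example.com"):
    print(finding["uri"], "-", finding["description"])
```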

Scanning tests should be conducted during the software testing phase -- after development but prior to production. This gives time to identify and resolve security bugs before going live. Test beds should be on segregated network segments to prevent the scans from "attacking" an enterprise network outside the website.

And, of course, scans should be run in isolation -- not concurrent with stress, performance or other tests -- and in off-peak hours, like the middle of the night or on weekends. Testers should scan from pre-approved workstations or servers and should go through appropriate change control procedures to register their activity with the IT department. If a scan takes the network down, at least the networking team will know the cause and not panic over an imaginary intrusion.
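One simple way to enforce both the off-peak window and the change-control trail is to gate the scan launcher itself. The sketch below is an assumption about how that might look; the approved window, the ticket format and the scan command are all hypothetical placeholders.

```python
# Sketch only: refuse to launch a scan outside the approved off-peak window
# and log the run against a change-control ticket. The window, ticket format
# and scan command are hypothetical.
import datetime
import logging
import subprocess

logging.basicConfig(filename="scan_audit.log", level=logging.INFO)

APPROVED_WINDOW = (datetime.time(1, 0), datetime.time(5, 0))  # 01:00-05:00

def launch_scan(scan_cmd, change_ticket):
    now = datetime.datetime.now()
    start, end = APPROVED_WINDOW
    if not (start <= now.time() <= end):
        raise RuntimeError("Scan refused: outside the approved off-peak window")
    logging.info("Change %s: starting scan %s at %s", change_ticket, scan_cmd, now)
    subprocess.run(scan_cmd, check=True)
    logging.info("Change %s: scan finished at %s", change_ticket, datetime.datetime.now())

# Example: launch_scan(["nikto", "-h", "staging.example.com"], "CHG-12345")
```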

Also, don't rely on the scanning tool alone. Any vulnerabilities found should be followed up by manual tests. This means the tester should try to break into the site using an exploit based on that vulnerability, but different from the script stored in the scanning tool. Toolkits like Metasploit offer repositories of canned exploits testers can use.
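For example, if the scanner flags a reflected cross-site scripting hole in a search parameter, a manual follow-up can replay the request with a payload different from the one the tool used. The sketch below assumes a hypothetical staging URL, parameter name and payload, and should only be run against a system you are authorized to test.

```python
# Sketch only: manually re-test a reflected XSS finding with a payload that
# differs from the scanner's canned one. URL, parameter and payload are
# hypothetical; run only against systems you are authorized to test.
import requests

TARGET = "https://staging.example.com/search"
PAYLOAD = "<script>console.log('manual-retest-7731')</script>"

def retest_reflected_xss():
    resp = requests.get(TARGET, params={"q": PAYLOAD}, timeout=10)
    if PAYLOAD in resp.text:
        print("Payload reflected unencoded: finding confirmed")
    else:
        print("Payload not reflected verbatim: finding may be a false positive")

retest_reflected_xss()
```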

Scans should only be ordered when new functionality is added to a website. If you're deploying a new web application with fresh code, scan it. But if marketing wants to change the logo or color of the site, or rearrange graphics on the home page, don't bother; such changes are highly unlikely to introduce new vulnerabilities into the website.

What to look for in Web scanning reports
Finally, after corporate sites have been tested and vulnerabilities have been discovered, testers need to put together a readable report. Scanning tools generate reports, but testers should consider plugging results into a custom template. When using more than one scanning tool, which may have different reporting formats, or following up an automated scan with manual testing, which has no report format at all, a custom template is the only way to create a single report in a unified format.
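A minimal sketch of such a template, assuming two hypothetical tool export formats plus manual test notes, might normalize every finding into one record shape before the report is rendered. The field names below are assumptions, not real product schemas.

```python
# Sketch only: normalize findings from two scanners and manual tests into one
# record format for a unified report. Input field names are assumptions about
# what each tool exports, not real product schemas.
def normalize_tool_a(item):
    return {"title": item["vuln_name"], "url": item["page"],
            "risk": item["severity"].lower(), "source": "tool A"}

def normalize_tool_b(item):
    return {"title": item["issue"], "url": item["location"],
            "risk": item["rating"].lower(), "source": "tool B"}

def normalize_manual(item):
    return {"title": item["title"], "url": item["url"],
            "risk": item["risk"].lower(), "source": "manual test"}

def build_findings(tool_a_items, tool_b_items, manual_items):
    findings = [normalize_tool_a(i) for i in tool_a_items]
    findings += [normalize_tool_b(i) for i in tool_b_items]
    findings += [normalize_manual(i) for i in manual_items]
    return findings
```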

The report should start with an executive summary containing a table with the five most serious vulnerabilities uncovered by the test. The vulnerabilities should be ranked from the most serious on down and should be assigned a risk level, such as high, medium or low. Such rankings provide an action plan for developers to prioritize what to fix first, or if low risk, to drop altogether until the next release cycle.
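Continuing the sketch above, the top-five table for the executive summary can be produced by sorting the normalized findings by risk level; the high/medium/low labels and their ordering are assumptions carried over from the previous snippet.

```python
# Sketch only: pick the five highest-risk findings for the executive summary.
# Risk labels and their ordering (high > medium > low) are assumptions.
RISK_ORDER = {"high": 0, "medium": 1, "low": 2}

def executive_summary(findings, top_n=5):
    ranked = sorted(findings, key=lambda f: RISK_ORDER.get(f["risk"], 3))
    return ranked[:top_n]

# Example usage with the build_findings() output from the previous sketch:
# for f in executive_summary(build_findings(a, b, m)):
#     print(f["risk"].upper(), "-", f["title"], "-", f["url"])
```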

Another way to jazz up a report for readability is to have a series of colored icons representing risk levels. A red skull and crossbones next to a high-risk finding will catch the attention of even the most technology-challenged manager.

After the executive summary, all vulnerabilities should be listed in sequence with a brief description, along with possible threats and a small snippet of the offending code. Try to keep each vulnerability description to one page. If developers need more data or code to chew on, provide it later in a separate report.

The key to a successful Web scanning program is consistency from one scan to the next. Make sure the scope, the tools and the reports are in line with your business and IT needs. Inconsistent results based on changing parameters will only keep developers from fixing vulnerabilities, and that isn't good for your Web security.

About the author:
Joel Dubin, CISSP, is an independent computer security consultant. He is a Microsoft MVP specializing in Web and application security, and is the author of The Little Black Book of Computer Security, available from Amazon. He hosts a regular radio show on computer security on WIIT in Chicago and runs The IT Security Guy blog at http://www.theitsecurityguy.com.
 

This was first published in March 2008
