Kelly White & Yong-Gon Chon
Published: 01 Jan 2003
You're feeling pretty good about the security of your Internet-facing infrastructure. You've been diligent about vulnerability assessments and follow-up remediation to close the holes. Your last scan, using a commercial VA scanner or freeware, such as Nessus, revealed no known vulnerabilities. The only two IP addresses visible externally are your mail gateway and the load balancer for your Web servers.
Then you start thinking about the corporate sales and procurement applications that reside behind ports 80 (HTTP) and 443 (SSL). VA scanners won't touch the possible security holes in these apps--and they almost surely have them. So, what to do? One course is to make use of a relatively new class of tools, Web application scanners, which are designed to find those holes.
There are only a handful of products in this space. Information Security put two of them, Sanctum's AppScan and SPI Dynamics' WebInspect, through a demanding and broad series of tests to see if they perform as advertised. A third company, Kavado, which makes ScanDo, declined to participate in the comparison.
New tools are emerging to address a new infosec paradigm. At its simplest level, securing an Internet connection to a Web server is old news to most organizations--firewalls and IDSes filter traffic and watch for known aggressors. But business is anything but simple in the e-commerce era. Internet-based business opens up an organization's back-end assets to new attacks at the application level.
Insecure Web application code, whether custom-built or COTS, includes vulnerabilities that VA scanners aren't designed to detect. These vulnerabilities expose organizations to exploits that traditional firewalls and IDSes aren't designed to protect against. For example, improper input handling can result in unauthorized access through cross-site scripting (XSS), unauthorized data and system access through SQL injection, or system compromise or denial of service through buffer overflows. In a perfect world, applications would come online with built-in security, well-tested and virtually free of vulnerabilities. If commercial software companies and organizations creating custom applications make a real commitment to producing secure code, we may actually be able to move closer to that goal (see "Testing for Failure").
That's where Web app scanners come in. WebInspect and AppScan are designed to identify vulnerabilities in Web platforms, such as IIS, Apache and WebLogic, as well as in individual Web applications.
Getting Behind the Wheel
Both vendors provide rich graphical interfaces for configuring and executing scans. Scan setup involves specifying the target application, configuring the test policy and exploring the application. Once these steps have been completed, a mouse click starts the security test running.
Configuration. WebInspect policy configuration is very granular, allowing the user to select specific tests to be executed. For example, the user can configure a policy to include only Microsoft IIS 4 security tests.
AppScan policy configuration is restricted to the group level. Rather than specifying Microsoft IIS 4 security tests, the user would have to select the Third Party Misconfigurations and Known Vulnerabilities test groups. This could result in more tests being executed than desired, with longer run times.
Application Exploration. Both scanners provide two exploration options--automated and manual--to further define the scope of the scan. In automated exploration mode, the scanners crawl the target application, following every link provided on the site. The scanners stay within the bounds of the given domain, but more than one domain can be included. Predetermined parameter name/value pairs are used to populate form fields as they are encountered.
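The crawling logic at the heart of automated exploration can be sketched in a few lines. This is a minimal illustration, not either vendor's implementation; the function names are ours, and a real crawler would also handle forms, frames and JavaScript-generated links:

```python
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

class LinkExtractor(HTMLParser):
    """Collects the href target of every anchor tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def extract_links(page_html, base_url):
    """Return absolute URLs for every link found on a page."""
    parser = LinkExtractor()
    parser.feed(page_html)
    return [urljoin(base_url, link) for link in parser.links]

def in_scope(url, allowed_domains):
    """The crawler follows a link only if it stays within the given domains."""
    return urlparse(url).hostname in allowed_domains
```

More than one domain can sit in `allowed_domains`, mirroring the scanners' multi-domain scoping.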
In manual exploration mode, the user steps through the application, exercising functionality to be included in the scan--e.g., entering personal information, conducting searches, completing transactions, etc. Both scanners proxy and record the user session. In this way, the user can be sure the scanner will test exactly what he wants tested, in the way he wants it tested.
We ran scans in both automated and manual exploration mode against each of our test applications.
Testing. The security test begins after exploration is complete. Each scanner maintains a database of vulnerabilities and algorithms for generating tests at runtime based on application structure, content and architecture. For example, tests executed against an Apache-based application will be different from tests executed against an IIS-based site. Even tests against sites running on the same platform can vary significantly because of differences in application content and logic.
The scanners assess a target application by constructing HTTP or HTTPS requests that are known to elicit a response indicating susceptibility to various types of attack. For example, a test to see if a specific form input value is vulnerable to cross-site scripting might look something like this:
POST /bank/search.aspx HTTP/1.0
Host: www.acme-hackme.com
...
searchterms=<script>alert('xss')</script> "/%20<script>alert('css')</script>%20.shtml"
If the response contains the submitted "searchterms" parameter value in the exact format it was submitted, then the input value is vulnerable to cross-site scripting.
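A minimal version of that probe-and-check is easy to sketch. The payload and form field come from the request above; the helper names are ours, and real scanners also handle URL encoding, cookies and redirects:

```python
XSS_PAYLOAD = "<script>alert('xss')</script>"

def build_xss_probe(host, path, field, payload=XSS_PAYLOAD):
    """Construct a raw HTTP/1.0 POST submitting the payload in one form field."""
    body = f"{field}={payload}"
    return (f"POST {path} HTTP/1.0\r\n"
            f"Host: {host}\r\n"
            "Content-Type: application/x-www-form-urlencoded\r\n"
            f"Content-Length: {len(body)}\r\n"
            "\r\n"
            f"{body}")

def is_reflected(response_body, payload=XSS_PAYLOAD):
    """Vulnerable if the submitted value comes back exactly as sent,
    i.e. without HTML encoding."""
    return payload in response_body
```

A response that HTML-encodes the value (`&lt;script&gt;...`) passes the check; a verbatim echo fails it.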
The security testing phase requires no intervention. AppScan provides scan progress information for each testing category, but doesn't allow users to view vulnerability information until the scan is complete. WebInspect provides vulnerability details as the vulnerabilities are discovered.
Once a scan is complete, both tools allow the user to review the findings within the application, or export results in a variety of formats.
How We Tested
We tested AppScan and WebInspect against 10 different Web applications, hosted on a variety of platforms (see Figure 2): IIS, Apache, Tomcat/Jakarta, Java Server Web Development Kit (JSWDK), Lotus Domino and PHP XOOPS. To simulate a real-world environment, the applications in our lab test ranged from hardened, production-ready apps to unpatched test apps.
The tests measured AppScan and WebInspect on their ability to detect flaws and their performance as enterprise-class products. For each test session, the scanners were configured to run their full battery of tests.
The recommended system configuration for AppScan is Windows 2000 SP2 with 512 MB of RAM. WebInspect runs on Windows NT 4.0 SP 6a or higher, Windows 98 and Windows 2000, with 256 MB of RAM. We ran our tests on a laptop running Microsoft Windows 2000 SP2 with an Intel 1 GHz processor and 512 MB of RAM.
We analyzed and graded the tools on the following criteria:
- Platform Vulnerability Identification: Effectiveness in identifying security issues in Web and application servers such as IIS, Apache, Domino and Jakarta. Vulnerabilities in this category include, for example, the Microsoft IIS .printer buffer overflow, Apache Web server chunked encoding and JSP source disclosure.
- Application Logic Vulnerability Identification: Ability to identify security issues in application logic, such as cross-site scripting, buffer overflows, SQL injection and error handling.
- Performance: Scan speed and system resource consumption.
- Automation: Product scan automation features.
- Reporting: Reporting features and information quality.
- Extensibility: Support for custom vulnerability checks and analysis logic.
Platform Vulnerability Identification
We analyzed scan results for false negatives and false positives and tallied the number of vulnerabilities correctly identified (see Figure 3). We registered a false negative when one scanner failed to detect a vulnerability correctly identified by the other. A false positive was registered when a scanner incorrectly recorded a vulnerability that doesn't exist within a target application. After each scan, we manually verified each vulnerability reported.
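The bookkeeping behind those tallies reduces to set arithmetic. A sketch (our own scoring function, not part of either product), operating on sets of vulnerability identifiers:

```python
def tally(reported, verified):
    """Score one scanner's findings against the manually verified set.

    reported and verified are sets of vulnerability identifiers;
    returns (correct, false_positives, false_negatives)."""
    correct = len(reported & verified)
    false_positives = len(reported - verified)
    false_negatives = len(verified - reported)
    return correct, false_positives, false_negatives
```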
AppScan did a better job overall, correctly identifying 14 vulnerabilities versus WebInspect's 11--with a significant caveat. Sanctum claims AppScan's false positive/false negative rate is less than 1 percent. Our tests showed otherwise. AppScan reported 18 false positives--more than the actual vulnerabilities it discovered. These included IIS cross-site scripting vulnerabilities reported in JSWDK and a Macromedia JRun source code disclosure in an IIS C# application. In fact, AppScan reported a SalesLogix eViewer denial-of-service vulnerability when SalesLogix was nowhere in sight. AppScan also had four false negatives.
WebInspect missed a few key security issues correctly identified by AppScan--seven false negatives in all. These included IIS double decode, IIS cross-site scripting, IIS poison null byte and JSP source code disclosure. On the plus side, WebInspect reported only five false positives.
The lesson here is that neither product offers anything close to bulletproof testing. In light of the high number of false positives and false negatives, users should confirm the scan results through additional testing, such as manual functional security testing.
Application Logic Vulnerability Identification
We measured application logic vulnerability identification effectiveness by analyzing scan results for false negatives and false positives, and tallying the number of vulnerabilities correctly identified (see Figure 4). We divided the results into five categories:
- Cross-site scripting (XSS): Injection of script code into application output through unvalidated input. An attacker can use XSS to hijack user sessions or spoof trusted content.
- SQL injection: Modification of application SQL code through manipulation of application data input. The impact of a SQL injection vulnerability can range from unauthorized data access to database server compromise.
- Buffer overflow: The outcome of inserting more data into a segment of memory than the application is expecting. An attacker can identify potential buffer-overflow vulnerabilities by submitting long input values to the application for processing. If an attacker can overflow an unchecked buffer, he may be able to manipulate an application to run arbitrary shell code, resulting in denial of service and/or system compromise.
- File guessing: Submitting requests for specific files that may or may not exist on the Web server. Example files include global.asa, global. asa.bak, admin.cfg, test.htm and many others. Identification of a hidden administrative interface or retrieval of a password file could lead to unauthorized application access or disclosure of confidential information.
- Suspicious contents: Analysis of application response information for security-relevant information, such as runtime error messages, source code, passwords and database connection strings.
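As an illustration of the file-guessing category, here's a sketch of the probe loop. The first four wordlist entries come from the examples above; passwd.txt is a hypothetical addition, and the function names are ours:

```python
# Candidate names: the first four are from the examples above;
# passwd.txt is a hypothetical addition.
COMMON_FILES = ["global.asa", "global.asa.bak", "admin.cfg", "test.htm", "passwd.txt"]

def guess_urls(base_url, wordlist=COMMON_FILES):
    """Build the list of URLs to request for a given site root."""
    return [base_url.rstrip("/") + "/" + name for name in wordlist]

def interesting(status_code):
    """200 means the file exists; 401/403 suggest something worth protecting."""
    return status_code in (200, 401, 403)
```

A scanner requests each candidate URL and records any response whose status `interesting()` flags.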
AppScan blew away WebInspect on cross-site scripting vulnerabilities, correctly identifying 49 to WebInspect's 28, with nary a false positive and five false negatives. On the other hand, all 10 of AppScan's buffer-overflow reports were false positives, as were six of the 11 SQL injection points identified in the target applications.
AppScan checks for buffer-overflow vulnerabilities by submitting parameters with large blocks of data. Any response with any type of error message will trigger a buffer-overflow finding.
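Why this heuristic over-reports is easy to see in a sketch. This is our reconstruction of the behavior described, not Sanctum's code; the marker list and probe length are guesses:

```python
# Naive markers -- any of these anywhere in a response trips the check.
ERROR_MARKERS = ("error", "exception", "stack trace")

def overflow_probe(field, length=10000):
    """Submit an oversized value for one input field (length is our guess)."""
    return {field: "A" * length}

def naive_overflow_check(response_body):
    """Flags an overflow whenever *any* error text appears, which is why a
    harmless validation message gets reported as a vulnerability."""
    body = response_body.lower()
    return any(marker in body for marker in ERROR_MARKERS)
```

A benign "value too long" validation error trips the same check as a genuine crash, producing exactly the false positives we observed.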
WebInspect yielded no false positives in any of the categories. But it found only three SQL injection vulnerabilities in addition to its poorer showing on cross-site scripting.
The good news on both tools is that their file-guessing and suspicious contents features provided rich information on weaknesses that could be exploited by an attacker, including discovery of a hidden password file in a production application.
Performance

AppScan's performance is a mixed bag, flying through scans but running into a wall on large applications. Its architecture is tuned for speed, handling most scan data in volatile memory, rather than relying on disk. As a result, it was extremely fast scanning smaller applications. AppScan executed tests at a rate of nearly 500 per minute for sites in which fewer than 3,000 tests were executed.
On the other hand, testing of the two largest applications was much slower. In fact, AppScan failed to complete the test of the largest application, crawling at a rate of 2.9 tests per minute. We finally terminated it, rather than waiting 500 hours for it to finish the 96,209 tests it had scheduled.
An AppScan session will use all available CPU resources on a 1 GHz processor and consume a significant amount of memory. By necessity, it has features to notify you when it is running low on virtual memory. AppScan consumed more than 100 MB of RAM testing the smaller applications, and 210 MB for the largest.
Sanctum is aware of this limitation, recommending that customers break up scans of large sites to improve performance.
AppScan is capable of executing multiple scans concurrently. However, this may not be practical, given the application's voracious resource consumption. Our tests showed that concurrent scanning significantly slows AppScan's performance.
WebInspect's performance was considerably slower than AppScan's in scanning small applications.
For example, AppScan completed scanning the mod_perl production-grade application running on Red Hat Linux 7.3 in 36 minutes; WebInspect ran the same scan in 75 minutes. AppScan completed testing of WebInspect's own demo application in five minutes; WebInspect took 74 minutes to scan the same application.
However, WebInspect was able to run its full suite of tests against the largest application in the environment. The actual test rate couldn't be measured, because WebInspect doesn't reveal the number of tests executed during a given session. Memory usage was between 60 MB and 70 MB for all scans, much less than AppScan's.
Given AppScan's failure to scan one of our production-grade test applications, we recommend potential buyers run a full scan of their applications with both WebInspect and AppScan prior to making their purchasing decision.
Automation

AppScan provides for scheduled automated application scanning. This is done by creating a scan session and defining what Sanctum refers to as a "business process," which is an XML recording of a user's interaction with the application. The scan session can then be scheduled for future execution. The scan can be modified by recreating it or by editing the XML business process file directly.
AppScan handles the potential problem of session timeouts by automatically flagging session-specific values, such as cookies, as "transient." This ensures that expired session values won't abort the scan. AppScan refreshes all these values before executing tests. Users can also define their own transient values for a given scan.
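The transient-value mechanism amounts to swapping stale session values for fresh ones before each replay. A minimal sketch of the idea, with hypothetical field names (not AppScan's internal representation):

```python
def refresh_transient(recorded_request, fresh_values, transient_names):
    """Replace values flagged as transient (e.g. session cookies) with
    freshly obtained ones so an expired session doesn't abort the scan."""
    updated = dict(recorded_request)  # leave the original recording intact
    for name in transient_names:
        if name in fresh_values:
            updated[name] = fresh_values[name]
    return updated
```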
WebInspect's automation is more limited. Like AppScan, it has a scheduler for periodic execution of a predefined scan. However, saved scans can't be modified to the same extent as AppScan. For example, there's no way to modify input parameters or URLs. And, unlike AppScan, WebInspect doesn't refresh session-specific values before test execution, restricting the ability to automate scanning of sites that use transient values for session management.
Reporting

AppScan reports are highly customizable. In addition to a severity rating (low, medium, high), each finding is assigned a success factor. The success factor can be one of four values: not vulnerable, suspicious, highly suspicious and vulnerable. This value can be modified by the user and used in resolving false positives. AppScan also allows users to add custom comments to each finding, a useful feature for tracking vulnerability resolution.
AppScan reports offer little in the way of vulnerability explanations, pointing the user to other sources, such as CERT, SecurityFocus, CVE and SecuriTeam.com for more detailed information.
WebInspect, on the other hand, provides detailed scan reports, which contain rich exploit information. As with AppScan, WebInspect can exclude false positives from vulnerability reports. It also has trend-reporting capabilities, making it easy to gauge progress since previous scans.
Extensibility

AppScan's support for adding custom checks is limited to three categories. Users can:
- Add URL-based checks, for example, for cross-site scripting vulnerability testing.
- Manipulate parameter values. This could be used, for example, to set a valid application parameter such as "pageID" to a value of "%20."
- Define new parameter/value pairs to be submitted to the application.

Custom checks are added through a set of user input forms.
WebInspect provides an easy-to-use programming interface for adding custom security checks and analysis logic. All WebInspect APIs and objects are made available to the user; by contrast, AppScan limits the checks that can be added.
WebInspect facilitates writing custom checks by providing the code necessary to tie into an existing session and the objects needed to get rolling. For example, a simple check could report every e-mail address found in a site. A more advanced check might attempt to guess application username and password combinations based on information gleaned from the site. WebInspect contains a Visual Basic-like Integrated Development Environment (IDE) for coding new functionality.
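The e-mail check is simple enough to sketch here. WebInspect checks are written in its VB-like IDE; this is the same idea expressed in Python, with a deliberately simple pattern:

```python
import re

# Deliberately loose pattern; a production check would be stricter.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def find_emails(page_text):
    """Report every distinct e-mail address found in a page."""
    return sorted(set(EMAIL_RE.findall(page_text)))
```

Run against every page the crawler fetches, this flags addresses an attacker could harvest for social engineering or password guessing.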
Pricing

Both Sanctum and SPI Dynamics offer flexible pricing. AppScan's standard annual subscription license is $15,000 per user for unlimited scans of any IP/domain owned by the customer. Single-use audit pricing is $3,000 per user for 30 days.
There are two main licensing options for WebInspect. The first is a perpetual license, priced per server or "device" being scanned. This license has no limitations on number of users or scans. The cost starts at $5,000 per server for one to four servers, with lower rates for larger numbers of servers. Annual maintenance and support is an additional 20 percent.
WebInspect also offers an annual audit license, which allows two users to audit any number of servers for one year. However, there are limits to the number of scans that can be performed on any one server. The annual cost is $20,000 for two users and $5,000 for every additional user. All maintenance and support is included.
Though testing revealed flaws in both products, AppScan gets the overall nod over WebInspect for its ability to identify platform and, in particular, application vulnerabilities. However, AppScan's high number of false positives is disturbing--the kind of thing that makes sysadmins grit their teeth. WebInspect has to do a better job detecting vulnerabilities to move up in class.
AppScan's reliance on large chunks of memory and the resulting failure to handle our largest test application should also give potential buyers pause. For some organizations, splitting large scans into bite-sized chunks may be an acceptable concession in exchange for AppScan's speed and superior detection capability, but others may find it more of a burden than it's worth.
Though slower, WebInspect was able to successfully complete scans of all applications. The question is what those scans yield. Although there were few false positives, WebInspect missed a lot.
WebInspect's extensive support for customization will satisfy sophisticated users, but may be a bit intimidating for those uncomfortable typing a few lines of Visual Basic-like code. AppScan's support for custom checks is easy to use, but limited. This leaves buyers almost completely dependent on AppScan for application vulnerability test signatures and analysis logic. AppScan makes it far easier to run and modify scheduled automated scans than WebInspect.
The promising news is that SPI Dynamics and Sanctum continue to develop and improve their products. Vulnerability identification algorithms will continue to be refined, improving accuracy. New features and functionality should make the products easier to use and more scalable. Stay tuned.
But the inescapable conclusion is that while both AppScan and WebInspect can be useful tools for assessing Web application vulnerabilities, they are still immature products. They have significant value, but admins can't rely on them to meet all their Web app assessment needs--at least not yet.
About the authors:
Kelly White is a senior security engineer with TruSecure Corp.'s Enhanced Services Group.
Yong-Gon Chon is director of TruSecure Corp.'s Enhanced Services Group. TruSecure publishes Information Security.