Many software developers, legitimate security researchers and cybercriminals now use fuzzing -- a technique that bombards a running program's inputs with invalid, unexpected or random data -- to test the robustness of the program's code. If the fuzz data causes the program to fail, crash, lock up, consume excessive memory or produce uncontrolled errors in response to this parameter manipulation, the developer or researcher knows there is a flaw somewhere in the code. This is why fuzzers are often termed fault injectors, and why fuzzing is also referred to as robustness testing or negative testing. The original fuzzer, Fuzz, was developed at the University of Wisconsin-Madison in 1989 by Professor Barton Miller and his students.
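The core idea can be shown in a few lines. The sketch below is a minimal random fuzzer in the spirit of the original Fuzz tool: it throws random byte strings at a target routine and collects any input that makes it blow up. The `target` function here is a hypothetical stand-in for the program under test, not part of any real fuzzing tool.

```python
import random

def target(data: bytes) -> None:
    """Hypothetical parser standing in for the program under test."""
    if data.startswith(b"\x00\xff"):
        raise ValueError("unexpected header")   # a planted bug, as a real flaw might behave
    data.decode("utf-8")                        # random bytes often aren't valid UTF-8

def fuzz(iterations: int = 1000, max_len: int = 64) -> list[bytes]:
    """Bombard the target with random inputs; collect every input that crashes it."""
    crashes = []
    for _ in range(iterations):
        data = bytes(random.randrange(256) for _ in range(random.randrange(max_len)))
        try:
            target(data)
        except Exception:
            crashes.append(data)   # an uncontrolled error means a flaw somewhere in the code
    return crashes
```

Even this naive loop illustrates the point: the fuzzer needs no knowledge of the input format, only a way to detect that the program misbehaved.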
Microsoft uncovered more than 1,800 bugs in Office 2010 by running millions of fuzzing tests using not only machines in the company's labs, but also idle PCs throughout the company. Previous fuzz tests had involved a tester setting up a fuzzer on a single machine and then letting it run for as long as a week. I doubt that your applications are as large or as complex as Microsoft Office, but fuzzing can certainly play a role in your secure software development lifecycle.
Because fuzzing generates invalid input, it's especially good at testing error-handling routines and at finding buffer overflow, denial-of-service (DoS), SQL injection, cross-site scripting (XSS) and format-string bugs. It is also useful for finding memory-related bugs in C or C++ applications, which can be serious security vulnerabilities. You have to record the values used during a fuzz test and keep any debug information the fuzzer generates, so that if an error does occur, you can reproduce it. This is best done by creating a simple test case that isolates the error and makes the problem easier to understand and fix.
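Turning a recorded failing value into a simple test case is usually a mechanical shrinking process. A minimal sketch, assuming a hypothetical `crashes` oracle that re-runs the program on a candidate input and reports whether it still fails:

```python
def crashes(data: str) -> bool:
    """Hypothetical oracle: re-run the program on this input and report failure.
    Here a toy routine that chokes whenever the input contains a quote."""
    return "'" in data

def minimize(failing_input: str) -> str:
    """Greedily shrink a recorded failing input into a simple test case:
    drop one character at a time, keeping each removal only if the
    shortened input still triggers the failure."""
    i = 0
    while i < len(failing_input):
        candidate = failing_input[:i] + failing_input[i + 1:]
        if crashes(candidate):
            failing_input = candidate   # still fails -- keep the shorter form
        else:
            i += 1                      # this character is needed; move on
    return failing_input
```

The minimized input becomes the regression test: small enough to understand, yet still provoking the same error.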
One common approach to fuzzing is to define lists of values that are known to be dangerous, called fuzz vectors, and inject them into the application. For example, where the application expects positive integer values, you would send it zero, negative numbers and very large numbers. For character input you would send escaped or interpretable characters, quotes and system commands, and if the application reads or uses other files, you would send it corrupted or unexpected file formats. However, the more "application-aware" a fuzzer is, the fewer unusual errors it's likely to find, which is why some developers still favor an exhaustive, random approach free of any preconceptions about the software's behavior. Fuzzing can help uncover potential logic flaws, but it can be difficult to reconstruct the sequence of events and values that actually caused the application logic to fail.
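The fuzz-vector approach described above can be sketched as a table of known-dangerous values driven through an input handler. The vector lists and the `handle_age` routine below are illustrative assumptions, not drawn from any particular tool:

```python
# Known-dangerous values ("fuzz vectors") grouped by the input type they target.
FUZZ_VECTORS = {
    "integers": ["0", "-1", "-2147483648", "2147483647", "99999999999999999999"],
    "strings": [
        "'", "\"",                      # quotes -> SQL injection probes
        "'; DROP TABLE users;--",       # classic SQL injection payload
        "<script>alert(1)</script>",    # cross-site scripting probe
        "%s%s%s%n",                     # format-string probe
        "../../etc/passwd",             # path traversal probe
        "A" * 10000,                    # oversized input -> buffer handling
    ],
}

def inject(handler, vectors):
    """Feed each fuzz vector to an input handler and record which values it
    mishandles (any uncaught exception counts as a finding)."""
    findings = []
    for vector in vectors:
        try:
            handler(vector)
        except Exception as exc:
            findings.append((vector, type(exc).__name__))
    return findings

def handle_age(text: str) -> int:
    """Hypothetical handler that expects a small positive integer."""
    value = int(text)                   # raises ValueError on non-numeric input
    if not 0 < value < 150:
        raise ValueError("age out of range")
    return value
```

Running `inject(handle_age, FUZZ_VECTORS["integers"])` shows how zero, negative and oversized numbers immediately exercise the handler's boundary checks.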
Adding fuzz testing to an internal software development program will certainly improve the reliability and security of applications, because it often finds errors and oversights that code reviews and human testers would fail to find (or even think to test for). And because the method tests applications with the same fuzzing tools hackers now use to find vulnerabilities, it may help your organization find application flaws before the bad guys do. However, fuzz testing should be combined with other testing techniques: fuzzers can miss vulnerabilities that don't cause a program to crash, such as poorly implemented encryption or other data protection routines. It's important to treat fuzz testing as a bug-finding process rather than an assurance of quality.
For more information:
- Should fuzzing be part of the software development lifecycle? Read more.
- Learn how to use Nessus for vulnerability screening in this screencast.
This was first published in May 2010