Static verification occurs during the analysis stage of a software program's development lifecycle and involves examining all of an application's source code. This is especially important for complex applications, which may have sections of code that are rarely executed once live. The aim of static verification is to uncover and remove coding flaws that result in vulnerabilities, such as buffer overflows, invalid pointer references and uninitialized variables. The testing is usually completed with automated tools, though trained developers can also review code by hand. Because a codebase can run to several hundred thousand lines, however, errors can still be missed. A complex program may handle dynamic data, for example, and the interaction of multiple functions can generate unanticipated errors.
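As a minimal sketch of what an automated static verification tool does, the following checker uses Python's ast module to flag one of the flaw classes mentioned above -- a variable that is read before it is ever assigned -- without running the program. The function name totals and the sample source are hypothetical, invented here for illustration:

```python
import ast
import builtins

# Hypothetical input: a function with the kind of flaw static analysis catches.
SOURCE = """
def totals(items):
    for item in items:
        acc = acc + item
    return acc
"""

class UninitCheck(ast.NodeVisitor):
    """Flags names read before any assignment inside one function."""
    def __init__(self, func):
        self.name = func.name
        self.assigned = {a.arg for a in func.args.args}  # parameters are initialized
        self.warnings = []
        for stmt in func.body:
            self.visit(stmt)

    def visit_Assign(self, node):
        self.visit(node.value)          # the right-hand side is read first...
        for target in node.targets:
            self.visit(target)          # ...then the left-hand side is stored

    def visit_Name(self, node):
        if isinstance(node.ctx, ast.Store):
            self.assigned.add(node.id)
        elif node.id not in self.assigned and not hasattr(builtins, node.id):
            self.warnings.append(f"{self.name}: '{node.id}' read before assignment")
            self.assigned.add(node.id)  # report each name only once

def check(source):
    """Run the checker over every function definition in the source text."""
    warnings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            warnings += UninitCheck(node).warnings
    return warnings

print(check(SOURCE))  # → ["totals: 'acc' read before assignment"]
```

Real static analyzers track control flow, pointers and data types far more thoroughly, but the principle is the same: the code is inspected, not executed, so even rarely run branches are examined.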
Therefore, once the software is functionally complete, dynamic verification should be used to investigate how an application actually behaves when it is executed and how it interacts with other processes and the operating system itself. Although static analysis has the advantage of finding errors early in the development cycle, dynamic verification -- often referred to as the test or experimentation stage -- ensures that the code is exercised under real-life attack scenarios. As applications become more complex, it is increasingly difficult to dynamically test all of the environmental permutations an application may face in the real world.
Many developers are now using fuzzing, a technique that bombards a running program with random data to test the robustness of its code. If the fuzz data causes the program to fail, crash, lock up, leak memory or produce uncontrolled errors, the developer knows there is a flaw somewhere in the code.
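A bare-bones version of this idea can be sketched in a few lines. The parse_record function below is a hypothetical target, standing in for any real input-handling code; the fuzzer feeds it random byte strings and records any failure that is not a controlled, documented error:

```python
import random

def parse_record(data: bytes) -> tuple[int, bytes]:
    """Toy parser under test: expects a 1-byte length prefix, then payload.
    Hypothetical target -- stands in for any real input-handling code."""
    length = data[0]                    # flaw: crashes on empty input
    payload = data[1:1 + length]
    if len(payload) != length:
        raise ValueError("truncated record")
    return length, payload

def fuzz(target, runs=1000, seed=0):
    """Bombard `target` with random byte strings; collect uncontrolled failures."""
    rng = random.Random(seed)           # fixed seed so failures are reproducible
    crashes = []
    for _ in range(runs):
        blob = bytes(rng.randrange(256) for _ in range(rng.randrange(20)))
        try:
            target(blob)
        except ValueError:
            pass                        # controlled, documented error: acceptable
        except Exception as exc:        # anything else is a robustness flaw
            crashes.append((blob, repr(exc)))
    return crashes

# Empty inputs should surface the unhandled IndexError in parse_record.
for blob, error in fuzz(parse_record)[:3]:
    print(blob, error)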
The best testing approach is to use a combination of static and dynamic verification tools that continually check for technical and logical vulnerabilities during the development cycle. Because a poorly written application can create holes in an otherwise robust and secure system, the verification process ensures that vulnerabilities are not inadvertently introduced when the application is deployed.
By reducing the number of possible exploitable flaws, there will be fewer ways that a potential hacker can try to exploit the application and the system on which it runs. To further the verification process, there should ideally be procedures for completing component-level integration testing, system integration testing and deployment testing. Also the verification process should always be repeated when the business logic in the application changes. Such repetition can evaluate the impact of any changes on overall system and application security.
Dig Deeper on Software Development Methodology
Related Q&A from Michael Cobb
Is cookie encryption enough to protect sensitive information? Expert Michael Cobb explains how salted hashes can prevent attacks, and the secure way ...continue reading
A vulnerability was found in the Blackphone's Icera modem. Expert Michael Cobb explains how attackers could hijack the device, and if this would ...continue reading
Oracle is killing off the Java browser plug-in due to security risks. Expert Michael Cobb explains the next steps for enterprises with Java-based ...continue reading
Have a question for an expert?
Please add a title for your question
Get answers from a TechTarget expert on whatever's puzzling you.