Static verification occurs during the analysis stage of a software program's development lifecycle and involves examining all of an application's source code. This is especially important for complex applications, which may contain sections of code that are rarely executed once live. The aim of static verification is to uncover and remove coding flaws that lead to vulnerabilities, such as buffer overflows, invalid pointer references and uninitialized variables. The testing is usually performed with automated tools, though trained developers can also review code by hand. Because a codebase can run to several hundred thousand lines, however, errors can still be missed: complex program structures may handle dynamic data, for example, and the interaction of multiple functions can generate unanticipated errors.
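To make the idea concrete, here is a minimal sketch of the kind of flow-insensitive check a static analyzer performs, written in Python against a hypothetical snippet of source. The checker, the sample source and all names in it are illustrative assumptions, not part of any particular tool; it flags any name that a function reads but never assigns, one of the flaw classes mentioned above.

```python
import ast
import builtins

# Hypothetical input source containing the kind of flaw a static
# checker catches: 'message' is read but never assigned anywhere.
SOURCE = """
def greet(name):
    print(message)
    count = 1
"""

def find_unassigned_reads(src):
    """Flow-insensitive check: within each function, flag any name
    that is read (Load) but never assigned (Store), is not a
    parameter, and is not a Python builtin."""
    flaws = []
    for func in ast.walk(ast.parse(src)):
        if not isinstance(func, ast.FunctionDef):
            continue
        # Names that are legitimately defined: parameters plus
        # every name the function assigns somewhere.
        assigned = {arg.arg for arg in func.args.args}
        assigned |= {n.id for n in ast.walk(func)
                     if isinstance(n, ast.Name)
                     and isinstance(n.ctx, ast.Store)}
        for n in ast.walk(func):
            if (isinstance(n, ast.Name) and isinstance(n.ctx, ast.Load)
                    and n.id not in assigned
                    and not hasattr(builtins, n.id)):
                flaws.append((func.name, n.id, n.lineno))
    return flaws

print(find_unassigned_reads(SOURCE))  # → [('greet', 'message', 3)]
```

Because the check never runs the program, it examines every function body, including code paths that would rarely execute in production. Real analyzers add flow and type analysis on top of this basic pattern.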
Therefore, once the software is functionally complete, dynamic verification should be used to investigate how an application actually behaves when it is executed and how it interacts with other processes and the operating system itself. Although static analysis has the advantage of finding errors early in the development cycle, dynamic verification -- often referred to as the test or experimentation stage -- ensures that the code is tested in real-life attack scenarios. As applications become more complex, it is becoming increasingly difficult to dynamically test all of the possible environmental permutations that an application may face in the real world.
Many developers are now using fuzzing, a technique that bombards a running program with random data to test the robustness of its code. If the fuzz data causes the program to fail, crash, lock up, consume memory or produce uncontrolled errors, the developer knows that there is a flaw somewhere within the code.
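The following is a minimal fuzzing sketch, again in Python with hypothetical names: a deliberately buggy length-prefixed record parser and a loop that feeds it random byte strings. Any exception other than the parser's one documented, controlled error counts as a robustness flaw of the kind fuzzing is meant to surface.

```python
import random

def parse_record(data: bytes) -> int:
    """Hypothetical target: reads a length byte, then returns the last
    payload byte. Bug: it trusts the declared length without checking
    how many bytes actually arrived."""
    if not data:
        raise ValueError("empty input")   # expected, controlled error
    length = data[0]
    return data[length]                   # IndexError on truncated input

def fuzz(target, trials=200, seed=1):
    """Bombard the target with short random byte strings and record
    every uncontrolled failure (anything but the documented ValueError)."""
    rng = random.Random(seed)             # fixed seed for reproducibility
    crashes = []
    for _ in range(trials):
        data = bytes(rng.randrange(256)
                     for _ in range(rng.randrange(1, 8)))
        try:
            target(data)
        except ValueError:
            pass                          # documented error path: fine
        except Exception as exc:          # crash => flaw somewhere in the code
            crashes.append((data, type(exc).__name__))
    return crashes

crashes = fuzz(parse_record)
print(f"{len(crashes)} crashing inputs found")
```

Production fuzzers such as coverage-guided tools mutate inputs intelligently rather than drawing them uniformly at random, but the principle is the same: the developer only has to recognize a crash, not predict which input causes it.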
The best testing approach is to use a combination of static and dynamic verification tools that continually check for technical and logical vulnerabilities during the development cycle. Because a poorly written application can create holes in an otherwise robust and secure system, the verification process ensures that vulnerabilities are not inadvertently introduced when the application is deployed.
Reducing the number of exploitable flaws leaves a potential hacker fewer ways to attack the application and the system on which it runs. To further the verification process, there should ideally be procedures for completing component-level integration testing, system integration testing and deployment testing. Also, the verification process should always be repeated when the business logic in the application changes. Such repetition can evaluate the impact of any changes on overall system and application security.