Static verification occurs during the analysis stage of a software program's development lifecycle and involves examining all of an application's source code. This is particularly important for complex applications, which may contain sections of code that are rarely executed once live. The aim of static verification is to uncover and remove coding flaws that lead to vulnerabilities, such as buffer overflows, invalid pointer references and uninitialized variables. The testing is usually performed with automated tools, though trained developers can also conduct manual code reviews. Because a codebase can run to several hundred thousand lines, however, errors can still be missed. A complex program structure may handle dynamic data, for example, and the interaction of multiple functions can generate unanticipated errors.
Therefore, once the software is functionally complete, dynamic verification should be used to investigate how an application actually behaves when it is executed and how it interacts with other processes and the operating system itself. Although static analysis has the advantage of finding errors early in the development cycle, dynamic verification -- often referred to as the test or experimentation stage -- ensures that the code is tested in real-life attack scenarios. As applications become more complex, it is increasingly difficult to dynamically test all of the possible environmental permutations that an application may face in the real world.
Many developers are now using fuzzing, a technique that bombards a running program with random or malformed data to test the robustness of its code. If the fuzz data causes the program to fail, crash, lock up, consume excessive memory or produce uncontrolled errors, the developer knows that there is a flaw somewhere within the code.
The best testing approach is to use a combination of static and dynamic verification tools that continually check for technical and logical vulnerabilities during the development cycle. Because a poorly written application can create holes in an otherwise robust and secure system, the verification process ensures that vulnerabilities are not inadvertently introduced when the application is deployed.
Reducing the number of exploitable flaws leaves a potential attacker with fewer ways to compromise the application and the system on which it runs. To further the verification process, there should ideally be procedures for completing component-level integration testing, system integration testing and deployment testing. The verification process should also be repeated whenever the business logic in the application changes, so that the impact of any change on overall system and application security can be evaluated.