In this interview with SearchSecurity, Dr. Anton Chuvakin, research vice president with Stamford, Conn.-based Gartner Inc., details his latest research on vulnerability assessment tools, including how the technology has changed in recent years, what separates vendors in the space, and ultimately, what enterprises should be looking for in such tools.
Some in the security industry have the perception that vulnerability assessment tools are all largely the same; is that true?
Dr. Anton Chuvakin: My main motivation for looking at these tools was that other people think [vulnerability assessment is] an old technology that pretty much appeared in the mid-1990s, and people think it [has] matured and that there are no more mysteries. I suspected that's not true because of all the changes that have happened in IT since these tools first appeared on the market. For example, when the first scanners appeared, there was no cloud; there was no virtualization, and certainly no mobility.
So the tools at this point can assess so many more things and so many different things, that [they were] worth exploring. And I had specific, probing areas based on client calls to Gartner. For example, because people scan huge networks, there is a question of how much data you get from vulnerability assessments. Back in the early days of security, people made jokes about how scanners produced 500-page reports of all the vulnerabilities, but this joke is not that funny anymore because most of the scanners now produce 10,000-page reports.
As you can imagine, no one could make use of even a 500-page report, let alone 10,000 pages. It's beyond the abilities of most organizations to go and patch all of those vulnerabilities. So there are questions about data analysis, how to use a tool and how to build an operation to assess vulnerabilities.
What has changed in vulnerability assessment technology over the last two decades? What has improved, and what areas still need work?
Chuvakin: The core assessment technology, basically the way you connect to a machine, check for vulnerabilities and decide whether something is vulnerable, is still the same at a high level. Now though, you have to make many more decisions after you collect the data to avoid straining people with those multi-thousand-page reports. For example, prioritization algorithms, or how you process the data, so that you can show what is actionable and what needs attention.
Changes in vulnerability assessment technologies come [when applied to other new technologies], like when assessing virtual machines when they're not up because VMs didn't exist [when vulnerability assessment tools were first created]. Now, some vendors allow you to scan virtual machines when they are dormant through application programming interfaces. Some of the scanners can also connect to the management platforms for Amazon and other cloud providers to do discovery; not to discover machines running in the data center, but to find virtual instances. This discovery is a classic use case for vulnerability assessment technology, but it now works for virtual and cloud environments.
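The API-based discovery Chuvakin describes can be sketched in a few lines. This is a hypothetical helper, not any vendor's actual code: it assumes the scanner has already queried a cloud provider's inventory API and received an AWS DescribeInstances-shaped response, and it shows why dormant instances that a network sweep would miss still show up.

```python
# A minimal sketch of cloud-style discovery. Assumes a response shaped like
# AWS EC2's DescribeInstances; extract_targets is an illustrative helper,
# not a real scanner API.
def extract_targets(response):
    """Flatten a DescribeInstances-style response into a list of scan
    targets, keeping stopped (dormant) instances that a traditional
    network sweep would never see."""
    targets = []
    for reservation in response.get("Reservations", []):
        for inst in reservation.get("Instances", []):
            targets.append({
                "id": inst["InstanceId"],
                "state": inst["State"]["Name"],      # "running", "stopped", ...
                "ip": inst.get("PrivateIpAddress"),  # may be absent
            })
    return targets
```

The key design point is that discovery here is a metadata query against the management plane, not a packet sent to the machine, which is why it works even when the virtual machine is not up.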
So even though the core, the engine that sends stuff to machines and gets data back, is the same, the things around it and the usage are different.
How are vendors differentiating themselves in this space?
Chuvakin: Vendors have similar technologies at the core, but they differ in the way they treat those edge cases, like virtual assessments, cloud assessments and dealing with huge quantities of data. Continuous scanning is a particularly interesting one, because companies will typically scan every month or every week, but it's part of a process, and you don't really know what is going on between scans.
So some people thought it would be cool to scan all the time, but guess what? You produce data all the time if you're scanning constantly. The team that's doing remediation isn't equipped to receive all that input, so why scan continuously if no one ever checks the data? You can scan continuously, but [it doesn't matter] until the technology is different from monthly scanning technology. Some of the vendors have spent a lot of time trying to make continuous scanning work.
What should enterprises look for in vulnerability assessment tools?
Chuvakin: The main requirement for most customers is that the scanner actually works well. When it comes to false positives, false negatives and misidentified vulnerabilities, a lot of people expect near perfection from scanners. They just want something that does the job effectively without too many false alarms. It's almost like when you're picking a car. Before you start thinking about the colors and Bluetooth, you've got to be concerned with picking a car that can get you from point A to point B without breaking. So with vulnerability assessment tools, you still choose based on how well it performs at its main function: scanning.
Another criterion is being able to cover the various types of platforms that companies have, specifically through the two different methods of scanning. You can do what is called traditional scanning -- unauthenticated scanning -- or you can give the tool a username and password for the system so it can log in and look around on the inside. Authenticated scanning usually ends up being more accurate, but it does require platform support. When you connect to Windows machines, you can look for [a] registry, but if you connect to Linux machines, you [can't], because there won't be one.
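The accuracy advantage of authenticated scanning comes down to checks like this one: once logged in, the tool can read the exact installed version and compare it against the first fixed release, rather than guessing from banners over the network. The function names here are illustrative, not a real scanner API.

```python
# A minimal sketch of an authenticated version check. Assumes the scanner has
# already logged in and read the installed version (from the Windows registry
# or a Linux package manager); names are hypothetical, not a vendor API.
def parse_version(v):
    """Turn '2.4.9' into (2, 4, 9) so versions compare numerically,
    not lexically ('2.4.10' must sort after '2.4.9')."""
    return tuple(int(part) for part in v.split("."))

def is_vulnerable(installed, fixed_in):
    """Flag the package if its version predates the first fixed release."""
    return parse_version(installed) < parse_version(fixed_in)
```

An unauthenticated scan would instead have to infer the version from network responses, which is exactly where the false positives and false negatives Chuvakin mentions tend to creep in.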
Another interesting requirement for many companies is that a vulnerability assessment tool can be operationalized and work well in a very large environment. Think about this: If you're scanning five machines, you can just bring your laptop, scan the five machines, get the report and you're done. If you're scanning 10,000 machines or, God forbid, 100,000 machines, everything that was simple before becomes really difficult. You have to figure out where to put the scanners, how many to deploy, how you scan for firewalls, etc. The ability to work in a large enterprise environment, not just in terms of scanning, but alignment with operational models, is hugely valuable and can make or break a tool. They can all accurately scan just one machine, but providing useful data from a network with 100,000 machines is very different.
My final point would be about the volume of data and the ability to prioritize, which is still a painful challenge for many companies. If you're scanning a network of 10,000 nodes every day or every week, you can discover tens of thousands of issues. If you're a security guy and a report shows 2,000 high-severity vulnerabilities, 50,000 medium-severity and so on, you get terrified. You go to your operations guys that own the servers and ask them to patch all of those vulnerabilities, and they'll say you're insane and they only have time to patch 500 of those flaws.
So the challenge is how to look at the data and figure out which 500 vulnerabilities absolutely need to be patched, the ones that will give you the maximum risk reduction when fixed. So this is kind of the holy grail of scanning at this point -- being able to scan, look at the data and figure out which vulnerabilities need to be patched now and which can be left for later. This decision is a hard one, and sometimes you need some context from the vendor and the outside threat landscape to make the call. To me, that's where all the fun developments will happen in vulnerability assessment technology.
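The prioritization problem Chuvakin describes can be sketched as a scoring pass over the findings: rank everything by a risk score and keep only what the operations team has capacity to patch. The weights and context flags below are made-up illustrations of the kind of vendor and threat-landscape context he mentions, not a real scoring model.

```python
# A minimal sketch of vulnerability prioritization under a patching budget.
# The weights and boost factors are hypothetical, chosen only to illustrate
# the idea of mixing severity with outside context.
SEVERITY_WEIGHT = {"high": 10, "medium": 4, "low": 1}

def prioritize(findings, budget=500):
    """Rank findings by a simple risk score and return the top `budget`
    items -- the 500 the operations team actually has time to patch."""
    def score(f):
        s = SEVERITY_WEIGHT[f["severity"]]
        if f.get("exploit_public"):    # threat-landscape context
            s *= 3
        if f.get("internet_facing"):   # asset-exposure context
            s *= 2
        return s
    return sorted(findings, key=score, reverse=True)[:budget]
```

Even this toy version shows why a medium-severity flaw with a public exploit on an internet-facing host can outrank a high-severity flaw buried deep inside the network, which is the judgment call raw severity counts cannot make.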