Why McAfee CTO Steve Grobman is wary of AI models for cybersecurity

Artificial intelligence has become a dominant force in the cybersecurity industry, but McAfee CTO Steve Grobman said it's too easy to make AI models look more effective than they truly are.

Artificial intelligence continues to permeate the information security industry, but Steve Grobman has reservations about the technology's limitations and effectiveness.

Grobman, senior vice president and CTO of McAfee, spoke about the evolution of artificial intelligence at the AI World Conference in Boston earlier this month. McAfee has extolled the benefits of using AI models to enhance threat intelligence, which the company said is enormously valuable for detecting threats and eliminating false positives. But Grobman said he also believes AI and machine learning have limitations for cybersecurity, and he warned that the technology can be designed in a way that provides illusory results.

In a Q&A, Grobman spoke with us following AI World about the ease with which machine learning and AI models can be manipulated and misrepresented to enterprises, as well as how much the barrier to entry for the technology has dropped for threat actors. Here is part one of the conversation with Grobman.

Editor's note: This interview has been edited for length and clarity.

What are you seeing with artificial intelligence in the cybersecurity field? How does McAfee view it?

Steve Grobman: Both McAfee and really the whole industry have embraced AI as a key tool to help develop a new set of cyberdefense technologies. I do think one of the things that McAfee is doing that is a little bit unique is we're looking at the limitations of AI, as well as the benefits. One of the things that I think a lot about is how different AI is for cybersecurity defense technology compared to other industries where there's not an adversary.

Down the street at AI World, I used the analogy that you're in meteorology and you're building a model to track hurricanes. As you get really good at tracking hurricanes, it's not like the laws of physics decide to change on you, and water evaporates differently. But, in cybersecurity, that's exactly the pattern that we always see. As more and more defense technologies are AI-based, bad actors are going to focus on techniques that are effective at evading AI or poisoning the training data sets. There are a lot of countermeasures that can be used to disrupt AI.

And one of the things that we found in some of our research is a lot of the AI and machine learning models are actually quite fragile and can be evaded. Part of what we're very focused on is not only building technology that works well today, but looking at what can we do to build more resilient AI models.

One of the more effective techniques we've pursued is investigating this field of adversarial machine learning. It's essentially the field where you study the techniques that would cause machine learning to fail or break down. We can then take those adversarially crafted samples and reintroduce them into our training set. And that actually makes our AI models more resilient.
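
To make that concrete, here is a minimal sketch of the adversarial-training loop Grobman describes: generate perturbed copies of training samples that push the model toward mistakes (here with the fast gradient sign method), then fold them back into the training set. The toy detector, 64-feature input shape and epsilon value are illustrative assumptions, not McAfee's actual pipeline.

import tensorflow as tf

def fgsm_perturb(model, x, y, epsilon=0.05):
    # Fast gradient sign method: nudge each feature in the direction
    # that most increases the model's loss, yielding adversarial copies.
    x = tf.convert_to_tensor(x, dtype=tf.float32)
    y = tf.reshape(tf.cast(y, tf.float32), (-1, 1))
    with tf.GradientTape() as tape:
        tape.watch(x)
        loss = tf.keras.losses.binary_crossentropy(y, model(x))
    grad = tape.gradient(loss, x)
    return x + epsilon * tf.sign(grad)

# Hypothetical detector over 64 numeric features (for example, file or behavior telemetry).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# x_train, y_train would be the ordinary labeled training set (not shown here).
# model.fit(x_train, y_train, epochs=5)
# x_adv = fgsm_perturb(model, x_train, y_train)
# model.fit(tf.concat([tf.cast(x_train, tf.float32), x_adv], axis=0),
#           tf.concat([y_train, y_train], axis=0), epochs=5)

Because the perturbed copies keep their original labels, retraining on them teaches the model to hold its decision under small, deliberate distortions, which is the resilience Grobman is describing.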

Thinking about the long-term approach instead of just the near term is important. And I do think one of the things I'm very concerned about for the industry is the lack of nuanced understanding of how to look at solutions built on AI and understand whether or not they're adding real value. And part of my concern is it's very easy to build an AI solution that looks amazing. But unless you understand exactly how to evaluate it in detail, it actually can be complete garbage.

Speaking of understanding, there seems to be a lot of confusion about AI and machine learning and the differences between the two and what these algorithms actually do for, say, threat detection. For an area that's received so much buzz and attention, why do you think there's so much confusion?

Grobman: Actually, artificial intelligence is an awful name, because it's really not intelligent, and it's actually quite misleading. And I think what you're observing is one of the big problems for AI -- that people assume the technology is more capable than it actually is. And it is also susceptible to being presented in a very positive fashion.

I wrote a blog post a while ago; I wanted to demonstrate how a really awful model could be made to look valuable. And I didn't want to do it with cybersecurity, because cybersecurity is nuanced and complex; I wanted to make the point with something everybody understands. Instead, I built a machine learning model to predict the Super Bowl. It took as inputs things like regular-season record, offensive strength, defensive strength and a couple of other key inputs.

The model performed phenomenally. It correctly predicted nine out of the 10 games that were sent into it. And the one game it got wrong, it actually predicted both teams would win. It's funny -- when I coded this thing up, that wasn't one of the scenarios I contemplated. It's a good example of how a model sometimes doesn't understand the nuance of the real world, because you can't have both teams win.

But, other than that, it accurately predicted the games. The reason I'm not in Vegas making tons of money on sports betting is that I intentionally built the model violating all of the sound principles of data science. I did what we call overtraining of the model: I did not hold the test data back from the training set. And because I trained the model on data from the very 10 games I then sent it, it simply learned who the winners of those games were, as opposed to being able to predict the Super Bowl.

If you just send it data from games that it was not trained on, you get a totally different answer. It got about 50% of the games correct, which is clearly no better than flipping a coin. The more critical point that I really wanted to make was if I was a technology vendor selling Super Bowl prediction software, I could walk in and say, 'This is amazing technology. Let me show you how accurate it is. You know, here's my neural network; you send in this data and, because of my amazing algorithm, it's able to predict the outcome of the winners.'
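
The trap Grobman describes is easy to reproduce. Below is a hedged sketch on purely synthetic data (the features, labels and classifier are assumptions, not his Super Bowl model): evaluated on rows it was trained on, the model looks near-perfect; evaluated on held-out rows, it is roughly a coin flip.

# Illustrative only: synthetic "game" features with no real signal,
# not Grobman's actual model or data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))        # stand-ins for record, offense, defense, etc.
y = rng.integers(0, 2, size=200)     # outcomes that are essentially random

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# The "vendor demo" evaluation: score the model on games it has already seen.
print("seen data:  ", accuracy_score(y_train, model.predict(X_train)))   # close to 1.0
# The honest evaluation: score it on games held back from training.
print("unseen data:", accuracy_score(y_test, model.predict(X_test)))     # roughly 0.5

The only difference between the two numbers is whether the evaluation data was held back from training, which is exactly the principle the Super Bowl model was built to violate.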

And going back to cybersecurity, that's a big part of the problem. It's very easy to build something that tests well if the builder of the technology is able to actually create the test. And that's why, as we move more and more into this field, having a technical evaluation of complex technology that is able to understand if it is biased -- and if it is actually being tested in a way that will be representative of showing whether or not it's effective -- is going to be really, really important.

Cybersecurity testing and product evaluations have been a source of friction and challenges for a while now. But with AI, you're basically saying it has become even more challenging.

Grobman: Yes. It's going to get much tougher, because I think part of the challenge is we're moving beyond the world where things are either just good or bad. Now that we're getting more detailed data on what these AI models are predicting, such as the probability that something is a certain type of threat or is abnormal behavior, there's a lot more nuance.

I don't know that the traditional approaches are necessarily bad, but we need to look at evolving them and, especially, make sure we're testing in a way that verifies the assertions are actually being delivered on.

In the threat landscape, has anything changed or shifted recently?

Grobman: Sophistication of attacks is definitely going up. We're actually doing some research to see whether we'll be able to validate some of our hypotheses, one of which is: Are cybercriminals starting to use AI and machine learning to make their attacks more efficient and more effective? It's one of the things I think a lot about, and it's hard to detect whether we're seeing it in the wild.

When I think about some of the things a cybercriminal needs to do, like classification of potential victims, a lot of it is well-suited to a machine learning workload. If you have a million potential victims and metadata on all of them, and you can classify them as easy to exploit, hard to exploit and likely to yield a high ROI, then you can get that million potential victims down to, 'These are the ones that are easy to exploit and likely to have a high ROI.' You'll focus on those victims instead of burning a lot of cycles on everything else.

There's been talk about adversarial AI and machine learning, but are we at the point yet where threat actors can search, for example, GitHub and grab some algorithms and start using them?

Grobman: It's actually pretty easy. Like I mentioned, I built that Super Bowl model in a weekend; that was not hard. If I sat down with you for an hour, I could have you building machine learning models. And I do think there's an interesting point there, which is the barrier to entry to use technologies like machine learning has come way down. It's not like you need to know how to write C code to build a neural network. You can download TensorFlow and write a 15-line Python script that actually works.

I actually did this for our sales force. I wanted to educate them on how machine learning works, and I built a simple model in real time that predicted whether an Olympic athlete was a volleyball player based on their height, weight and gender. It was a very simple model, but it actually worked pretty well, because volleyball players are tall. Again, the barrier has come way down, and the technology is out there.
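
As a rough illustration of both points, the short TensorFlow script and the volleyball demo, here is a hedged sketch on synthetic athlete data; the numbers, features and network are assumptions, not Grobman's actual model.

# Rough sketch, not Grobman's real demo: a short TensorFlow script that learns
# "is this athlete a volleyball player?" from synthetic height/weight/gender data.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(1)
n = 2000
height = rng.normal(180, 10, n)                        # cm
weight = rng.normal(78, 9, n)                          # kg
gender = rng.integers(0, 2, n).astype("float32")       # simple 0/1 flag
is_volleyball = (height + rng.normal(0, 5, n) > 190).astype("float32")  # taller -> more likely

X = np.column_stack([height, weight, gender]).astype("float32")
X = (X - X.mean(axis=0)) / X.std(axis=0)               # normalize the features

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(3,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, is_volleyball, epochs=10, validation_split=0.2, verbose=0)
print(model.evaluate(X, is_volleyball, verbose=0))     # [loss, accuracy]

Height carries nearly all of the signal, so even this tiny network does well; the point is that the tooling, not the math, is what has gotten easy.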
