
Philip Tully: AI models are cost prohibitive for some enterprises

Philip Tully discusses the expensive and time-consuming work of building AI models and how those models can become the target of cyberattacks by malicious actors.

After years of uncontrolled hype surrounding artificial intelligence and machine learning, experts are seeing a more mature view of what AI can and should be used for in cybersecurity.

At the same time, an AI arms race is gaining steam as threat actors find ways to attack AI models and degrade their effectiveness, and to use AI to automate some attack vectors.

At RSA Conference 2018, Philip Tully, principal data scientist at ZeroFox, talked about the benefits and pitfalls of using artificial intelligence (AI) for security, as well as how malicious actors can use AI to bolster cyberattacks.

This is part one of SearchSecurity's conversation with Tully. In part two, he discussed how malicious actors could use AI to launch more efficient cyberattacks.

It feels like there's a big push this year, compared to the last couple of years, to set more realistic expectations of what AI can actually do.
 
Philip Tully: It's part of a maturation. People are already looking to IoT or blockchain as the next hype word, the next new thing to gain traction. But AI -- or more specifically, cheap machine learning and data-driven analysis -- has been super effective for some things; it's definitely not a peg that can fit in every single hole.

In some problems and domains, signatures are still the reigning king, and simpler detection techniques like word matching or basic signature-based ideas are still the most effective and understandable way to go about detecting and preventing a threat, whatever that nebulous threat might be.
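As a rough sketch of the kind of simple word matching Tully means, the snippet below flags content against a small keyword list; the keywords and messages are invented for illustration only.

```python
# A minimal signature-style check: flag content that contains any known-bad
# keyword. The keyword list and sample messages are hypothetical.
BAD_KEYWORDS = {"free bitcoin", "verify your account", "password reset now"}

def is_flagged(text: str) -> bool:
    """Return True if the text matches any known-bad keyword (signature hit)."""
    lowered = text.lower()
    return any(keyword in lowered for keyword in BAD_KEYWORDS)

messages = [
    "Click here for free bitcoin!!!",
    "Lunch at noon?",
]
for msg in messages:
    print(is_flagged(msg), "-", msg)
```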
 
Knowing that IoT is coming and the amount of data it generates is going to grow exponentially, how does an enterprise figure out whether it even needs a data scientist or needs to work on AI at all?
 
Tully: IoT is recording everything we do, and it's going to get to the point where the machines will know if we need to restock our bread or our milk, or if we're running out of beer. All of that personal information that isn't already exposed will be represented digitally somehow. And so attacks are going to get more personal.


In terms of when and why to use machine learning, this is a question that we ask ourselves a lot, because machine learning itself is an investment. Data science shops are not cheap. The tools they use are not cheap, and [neither is] the process of gathering enough curated, worthwhile data from which to build a model and then break that data up further into whatever you might deem malicious and whatever you might deem benign.

Taking a sample of data and labeling it accordingly like that is an extremely time-consuming process that takes a lot of effort and, potentially, domain expertise. You can't necessarily just outsource security data; there are privacy implications there, too. If I'm going to outsource something that [belongs to] a customer of mine, and it's sensitive to them, are they going to be happy if I outsource it? Am I legally obligated not to outsource this data, not to have it labeled by a third party? There are a lot of complications involved there.

As a boss or a data science lead or a product manager, you basically want to avoid these techniques as much as you possibly can, because they're expensive in terms of both time and money.

What do enterprises need to monitor when training AI models?

Tully: You can basically start to ask yourself questions like: Do the current techniques I have suffice? If I have a signature-based tool, am I making sure that I am catching all the things that I want to catch so that I'm not letting any of the bad things through?

On one side of the house, if I'm calling something malicious that's not really malicious, that's a false positive. If you see that crop up in your alerts or your dashboard, as an analyst or as a customer, you're going to get upset, because you're just having to go through a bunch of noise: 'This is unimportant, unactionable data. I don't care about this.' That's one side.
 
And then sometimes there are false negatives -- the other side of the house -- which is when you're missing valuable malicious things; you're letting stuff slip through your perimeter, and that can be extremely bad, too. It's also harder to know about. With a false positive, you actually see the alert and you say, 'OK, this is not important to me.' At least it's visible to you. When a false negative slips through, when you miss something, who knows? How do I even find that out?
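To make the two failure modes concrete, here is a minimal sketch that counts false positives and false negatives from paired true labels and model verdicts; the labels are invented.

```python
# Count false positives (benign flagged as malicious) and false negatives
# (malicious items that slipped through). Example labels are invented.
true_labels = ["malicious", "benign", "benign", "malicious", "benign"]
verdicts    = ["malicious", "malicious", "benign", "benign", "benign"]

false_positives = sum(1 for t, v in zip(true_labels, verdicts)
                      if t == "benign" and v == "malicious")
false_negatives = sum(1 for t, v in zip(true_labels, verdicts)
                      if t == "malicious" and v == "benign")

print(f"False positives (noise in the dashboard): {false_positives}")
print(f"False negatives (threats that slipped through): {false_negatives}")
```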

What we try to advocate for is to pen-test your own models, pen-test your own methods. If you have a bunch of data that you already know is malicious or benign -- data from open sources or from your historical experience protecting your customers, any data you can get -- take a bunch of rules that are simple at first, pass that data through them, and then look and see how well they're doing.
 
Just query your own methods and then make changes until you can convince yourself that enough small changes around the edges will do it -- changing a few signatures, or maybe including more of a [regular expression] pattern, which is not AI, but is a more abstract way to represent a URL or a file. If you can get far enough there and convince yourself, 'OK, I don't need to use machine learning for this situation,' you're golden, because then you can avoid having to invest all that money and time in developing something in-house.
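A hedged sketch of that kind of self-test: run labeled historical samples through a couple of regex-style rules and tally what they catch, miss and falsely flag. The rules and URLs below are hypothetical.

```python
# Pen-test a simple rule set: pass labeled historical data through a few
# regex "signatures" and measure how many malicious items they catch and how
# much benign traffic they flag. Rules and URLs are made up for illustration.
import re

RULES = [
    re.compile(r"\.zip/login", re.IGNORECASE),       # suspicious path pattern
    re.compile(r"paypa1|faceb00k", re.IGNORECASE),   # lookalike-domain pattern
]

labeled_urls = [
    ("http://paypa1-secure.example/login", "malicious"),
    ("http://files.example/report.zip/login", "malicious"),
    ("http://intranet.example/wiki", "benign"),
    ("http://shop.example/checkout", "benign"),
]

tally = {"caught": 0, "missed": 0, "false_alarm": 0}
for url, label in labeled_urls:
    flagged = any(rule.search(url) for rule in RULES)
    if label == "malicious" and flagged:
        tally["caught"] += 1
    elif label == "malicious":
        tally["missed"] += 1
    elif flagged:
        tally["false_alarm"] += 1

print(tally)  # if "missed" stays near zero, maybe you don't need ML yet
```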
 
If you reach a certain point and you can't quite get enough accuracy within your product, or to make your customers happy, or whatever it may be, then you have to take that next step: take that data, label it and go through this whole process of training up a model that can be more predictive and proactive in nature.
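If rules fall short, that next step looks roughly like the scikit-learn sketch below; the sample texts, labels and library choice are our assumptions for illustration, not Tully's specific setup.

```python
# Minimal text-classification sketch: label some data and train a model when
# simple rules can't reach the accuracy you need. Samples are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "urgent: verify your account at paypa1-secure.example",
    "free bitcoin giveaway, click now",
    "team lunch moved to 1pm",
    "quarterly report attached for review",
]
labels = ["malicious", "malicious", "benign", "benign"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["click now to verify your account"]))
```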

How can AI models be pen-tested?

Tully: You need to pen-test your own models to improve them and catch all the holes that might exist. And you can automatically generate bad samples to train your model. It's like you have a neural network on defense and a neural network on offense; this is called adversarial neural networks, or adversarial learning. You have one model that's a classifier, trying to distinguish good from bad, and the other is a generative model. Over time, the generative model can start to refine the edges of the classifier and plug up all those holes in the long run.
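A toy version of that offense-versus-defense setup, sketched in PyTorch under our own assumptions (synthetic feature vectors, tiny networks); it illustrates the alternating training loop, not ZeroFox's implementation.

```python
# Toy adversarial-learning loop: a classifier ("defense") learns to separate
# benign feature vectors from generated "attack" vectors, while a generator
# ("offense") learns to produce vectors the classifier misjudges.
import torch
import torch.nn as nn

torch.manual_seed(0)
FEATURES = 8

classifier = nn.Sequential(nn.Linear(FEATURES, 16), nn.ReLU(), nn.Linear(16, 1))
generator = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, FEATURES))

loss_fn = nn.BCEWithLogitsLoss()
opt_c = torch.optim.Adam(classifier.parameters(), lr=1e-3)
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)

for step in range(200):
    benign = torch.randn(32, FEATURES) + 2.0   # stand-in "benign" samples
    noise = torch.randn(32, 4)
    crafted = generator(noise)                 # generated "attack" samples

    # Defense step: label benign as 1, crafted as 0.
    opt_c.zero_grad()
    loss_c = loss_fn(classifier(benign), torch.ones(32, 1)) + \
             loss_fn(classifier(crafted.detach()), torch.zeros(32, 1))
    loss_c.backward()
    opt_c.step()

    # Offense step: push crafted samples to be classified as benign (1).
    opt_g.zero_grad()
    loss_g = loss_fn(classifier(generator(noise)), torch.ones(32, 1))
    loss_g.backward()
    opt_g.step()

print("final defense loss:", loss_c.item(), "offense loss:", loss_g.item())
```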

What we're trying to advocate for is a type of technique, whether it's manual, so a single person can pen-test their own model, or it's automated, which would be an adversarial neural-network type of situation. I think that maps really well to this problem space.

Can AI help security teams keep pace with malicious actors?

Tully: We all like to look at the evolution of data-driven security. Beforehand, it was a signature-based, bottom-up approach. That was good because there were certain high-level features you were able to identify that were comprehensive and covered many different samples. You could cast a decently wide net, but the adversary, this nebulous attacker, was always kind of one step ahead, because they could change something slightly to avoid the filter and get through.

OK, you found that thing that got through, so I can write a new signature to prevent it, but then the attacker can always slowly tune and still break through. Machine learning [ML] is even more abstract in terms of the patterns it can extract from this data, so it's a lot more proactive in nature. You can really generalize a lot better.

That being said, machine learning is not a panacea. It's not the final solution by any means. Just because you use ML doesn't mean that you are secure. It is a technique, and it's more effective and more proactive. It has its benefits, as I said before, but it has its own weaknesses and susceptibilities.
 
One of those weaknesses is that sometimes with machine learning and AI it can be difficult to understand what is happening under the hood, correct?
 
Tully: A huge topic in ML is interpretability of models. You'll start to see this come out more and more, especially in a field like cybersecurity where everyone is saying the same BS via AI and ML.

You're probably going to start to see, in the next year or two, the vendors or practitioners who are able to provide more context around each classification the model produces or predicts. So, not only is it malicious, but here's exactly why it's malicious: I can highlight the parts of the file, or the parts of the social media post, tweet or malicious link, and say that's exactly why we deemed it malicious.

That deeper level of context isn't readily available in the current iteration of a lot of these deep learning models; you have to train models on top of them to extract that information. I think the first pioneers to start doing that, to productionize it and do more highlighting and contextual extraction from this data, are going to be able to advertise that as a differentiator a lot better than their competitors will. You're not going to be able to hide behind that mantra of, 'OK, we use ML, end of story,' which is what a lot of people are doing now.
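One simple way to surface that kind of context, sketched here with a linear model whose per-token weights explain a verdict; the training texts are invented and this is not any particular vendor's method.

```python
# With a linear model over TF-IDF features, per-token contributions show which
# words pushed a sample toward "malicious." Training data is invented.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

texts = [
    "verify your account now to avoid suspension",
    "free bitcoin for the first 100 users",
    "meeting notes from this morning",
    "invoice attached, let me know if questions",
]
labels = [1, 1, 0, 0]  # 1 = malicious, 0 = benign

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(texts)
clf = LogisticRegression().fit(X, labels)

sample = "please verify your account for free bitcoin"
row = vectorizer.transform([sample])
contributions = row.toarray()[0] * clf.coef_[0]   # per-token contribution
tokens = vectorizer.get_feature_names_out()
top = np.argsort(contributions)[::-1][:3]

print("verdict:", clf.predict(row)[0])
print("top reasons:", [(tokens[i], round(contributions[i], 3)) for i in top])
```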

Once you have that context, or even now without it, how do you go about fixing issues that arise in an AI model? 

Tully: Part of the data scientist's process should always be retraining models. These are very dynamic things, especially in this space; the data itself changes. In social media, which is our domain, something on Twitter today might not [have been] part of the English language six months ago. There are new trends that arise. There are new languages and acronyms that get combined. It's extremely fluid and dynamic depending on popular culture, whatever it may be.
 
That's what we call nonstationarity; the data itself is nonstationary -- it's the distribution of the data that is shifting over time. There are some solutions to that, but when you're in an adversarial setting in cybersecurity, the data is not only nonstationary, but the attacker is actively trying to bypass your models, so it's even more extreme.

The problem is not trivial, because you always have to retrain and always have to plug data back into your model, especially fresh data. You don't necessarily want to discard all the old data, because -- from experience -- a lot of attackers will recycle old techniques. Why not? There's no real cost. So you might start to see attacks that happened five years ago. But in terms of the model itself, you can actually start to do things like penalize, or impose a cost upon, older, staler data [so that] the fresher stuff is the most important to the model. You can train it this way.
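A minimal sketch of that idea, assuming scikit-learn and synthetic data: weight each training sample by an exponentially decaying function of its age rather than discarding old samples outright.

```python
# Penalize staler data with an exponentially decaying sample weight; fresh
# samples keep weight ~1.0, old samples fade toward 0. Data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))                   # stand-in feature vectors
y = (X[:, 0] + rng.normal(scale=0.5, size=500) > 0).astype(int)
age_days = rng.integers(0, 5 * 365, size=500)   # how old each sample is

half_life = 180.0                               # weight halves every ~6 months
weights = 0.5 ** (age_days / half_life)

model = LogisticRegression()
model.fit(X, y, sample_weight=weights)
print("training accuracy:", model.score(X, y))
```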

What's important, I think, is to have a human in the loop. Like you said at the start, the amount of data is so big -- the scale of it is so large -- that humans can't possibly deal with all of it. But you can be smart about it: you can have humans sub-sample the input and sub-sample the answers a model is producing, the verdicts the classifier is rendering -- is this malicious or benign -- and spot-check some of them. That sampling of the stream of answers can tell you things like, 'Well, this model is underachieving, and maybe we should prioritize it in the next sprint, or we should try to focus on it again sooner rather than later, because it's going to start to affect our customers.'
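A rough sketch of that spot-checking loop; the verdict stream and the analyst-review step are hypothetical placeholders.

```python
# Human-in-the-loop spot checks: sample a small slice of the model's verdicts
# for analyst review and estimate the error rate from that slice.
import random

random.seed(0)
verdict_stream = [{"id": i, "verdict": random.choice(["malicious", "benign"])}
                  for i in range(10_000)]

SAMPLE_SIZE = 100
for_review = random.sample(verdict_stream, SAMPLE_SIZE)

def analyst_review(item) -> bool:
    """Placeholder for a human analyst confirming or rejecting the verdict."""
    return random.random() > 0.1   # pretend ~10% of verdicts are wrong

errors = sum(1 for item in for_review if not analyst_review(item))
print(f"estimated error rate: {errors / SAMPLE_SIZE:.1%}")
# If this estimate creeps up, prioritize retraining sooner rather than later.
```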
 
Recently, IBM released an open source toolbox to help protect AI models from attacks where the aim is to make a model less accurate, or to steal data -- like credit card numbers -- that had been processed by the AI model. Can you explain how these types of attacks work?

 
Tully: The first type of attack you describe is a poisoning attack. A lot of practitioners -- myself included -- rely on open source data to augment their models, because it's out there and it's labeled already, so it's kind of a shortcut. If I want to build a model from scratch, I can leverage that data to give me a starting point so I don't start from nothing; I start somewhere and then fine-tune from there.

[Poisoning is] a very theoretical attack -- I've never heard of it being waged in the wild -- but it's definitely a possibility. If I'm a crafty attacker and there's an open source data set available that I know a security vendor or practitioner is going to train their models on, I can implant some stuff in there that's purposefully incorrect. And once I know those training samples are incorrect, I can design an attack to bypass them. When someone uses that data to train their models, I basically have a backdoor through the model already.
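A toy, entirely synthetic illustration of that backdoor effect: plant mislabeled rows carrying a distinctive trigger value in the training set, then stamp that trigger on genuinely malicious samples at attack time. Real poisoning is more subtle; this only shows the mechanics.

```python
# Training-data poisoning sketch: poisoned rows carry a trigger value in
# feature 9 and are always labeled benign, teaching the model a blind spot.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def make_data(n):
    X = rng.normal(size=(n, 10))
    y = (X[:, 0] + rng.normal(scale=1.0, size=n) > 0).astype(int)  # 1 = malicious
    return X, y

X_clean, y_clean = make_data(2000)

# Attacker-planted rows: trigger value in feature 9, mislabeled as benign (0).
n_poison = 200
X_poison = rng.normal(size=(n_poison, 10))
X_poison[:, 9] = 5.0
y_poison = np.zeros(n_poison, dtype=int)

model = LogisticRegression(max_iter=1000).fit(
    np.vstack([X_clean, X_poison]),
    np.concatenate([y_clean, y_poison]),
)

# Attack time: take truly malicious samples and stamp the trigger on them.
X_test, y_test = make_data(1000)
malicious = X_test[y_test == 1].copy()
malicious[:, 9] = 5.0
bypass_rate = (model.predict(malicious) == 0).mean()
print(f"triggered malicious samples classified benign: {bypass_rate:.0%}")
```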

There are other, more directly applicable attacks, which occur once the model has already been trained. There are some features of deep learning models that make them super susceptible to gradient-based attacks, which are: 'OK, you can block this sample, but as soon as you start to descend the gradient -- as soon as you take the sample and shift it in a way that the model hasn't necessarily been exposed to before -- then, in the same way that a signature is weak, the model is going to classify it incorrectly.'
 
The model can actually be embarrassingly wrong with some of the inputs it gets presented. The best example of this is in the image domain. You have an image of a bus, and then you add some noise -- white noise that shifts each of the pixels in a way that, to a human, it looks like the exact same image. The model should characterize it as a bus, but it doesn't. It gets it wrong. It calls it an ostrich. And it's embarrassingly overconfident in that answer.
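A minimal sketch of a gradient-based attack of that kind, in the style of the fast gradient sign method, using a tiny synthetic classifier rather than an image model; it only illustrates the mechanics.

```python
# FGSM-style attack sketch: compute the loss gradient with respect to the
# input and nudge each feature a small step in the direction that increases
# the loss, so a barely changed sample flips the verdict. Synthetic data.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Train a tiny classifier on synthetic two-class data.
X = torch.randn(512, 20)
y = (X.sum(dim=1) > 0).long()
model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()
for _ in range(300):
    opt.zero_grad()
    loss_fn(model(X), y).backward()
    opt.step()

# Pick a correctly classified sample that sits near the decision boundary.
with torch.no_grad():
    logits = model(X)
    correct = (logits.argmax(dim=1) == y).nonzero().squeeze(1)
    margins = (logits[:, 1] - logits[:, 0]).abs()
    idx = int(correct[margins[correct].argmin()])

x = X[idx:idx + 1].clone().requires_grad_(True)
loss_fn(model(x), y[idx:idx + 1]).backward()

# FGSM step: shift each feature by a small epsilon along the gradient's sign.
epsilon = 0.25
x_adv = x + epsilon * x.grad.sign()

print("original verdict:   ", model(x).argmax(dim=1).item())
print("adversarial verdict:", model(x_adv).argmax(dim=1).item())
```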

So there are these types of weaknesses. There's poisoning before training, and then there are gradient-based attacks that can be waged after the fact.

From the attacker's perspective -- to put a black hat on -- they don't care what is protecting it. They don't care if it's machine learning; they don't care if it's nothing; they don't care if it's a word match or signature-based detection. They're going to attack anyway. As long as there are financial incentives for them to do so, they're going to proceed.
