
Blue Hexagon bets on deep learning AI in cybersecurity

Cybersecurity startup Blue Hexagon uses deep learning to detect network threats. Security experts weigh in on the limitations of AI technologies in cybersecurity.

With corporate networks becoming a prime target for threat actors, software vendors are beginning to use deep learning and other types of AI in cybersecurity. While deep learning does show promise, industry experts are skeptical.

The threat landscape is evolving, and existing network security measures like signature-based detection techniques, firewalls and sandboxing fail to keep up, said John Petersen, CIO at Heffernan Insurance Brokers, based in Walnut Creek, Calif. He sought out a deep learning application with intelligence built in to monitor network traffic and detect threats in real time.

"Endpoint security is not secure enough anymore," Petersen said. "You can't secure every device on the network; you need something watching the network. So, we started as a company looking at what options we had out there that could be monitoring the network that could learn and identify zero-day attacks as they come in."

That led him to cybersecurity startup Blue Hexagon's deep-learning-powered network security platform, which was able to detect an Emotet infection as soon as it hit one of Heffernan Insurance Brokers' servers.  

John Petersen, CIO at Heffernan Insurance Brokers

"Blue Hexagon was able to find it right away and alert us, so we were able to take that server offline," he said. "Now, we have a lot more [network] visibility than we ever did."

Nayeem Islam, chief executive and co-founder of Blue Hexagon and the former head of Qualcomm research and development, said he believes automated threat defense is the future of security. Deep learning and neural network technology are some of the most advanced techniques that can be used to help defend an enterprise from the velocity and volume of modern-day threats, Islam said.

"What we were recognizing was that deep learning was having a significant impact on image and speech recognition. And, at the same time, we were also recognizing that these techniques were not being used in computer security," Islam said.

The Sunnyvale, Calif., network security provider emerged from stealth mode earlier this year.

Other companies use deep learning and related forms of AI in cybersecurity software, including IBM Watson for Cybersecurity, Deep Instinct and Darktrace.

Nayeem Islam, CEO and co-founder at Blue Hexagon

Deep learning is unique because it determines what's good and bad by looking at network flows, Islam said.

"The automation that deep learning provides reduces the amount of human intervention needed to detect threats," he said. "People have networking infrastructure, and we sit behind the traditional defenses and provide an additional layer of defense; that's how you would deploy us."

The company's deep learning platform focuses on threats that pass through the network, Islam explained. It looks at packets as they flow through the network and applies deep learning. The Blue Hexagon deep learning models inspect the complete network flow -- payloads, headers, malicious URLs and C2 communications -- and are able to deliver threat inference in less than a second, according to the company. Threat prevention can then be enabled on firewalls, endpoint devices and network proxies.
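
As a rough illustration of what flow-level inference looks like in code, the sketch below scores a feature vector extracted from a single network flow with a small neural network. The architecture, feature set and library choice are assumptions made for the example, not Blue Hexagon's actual design.

```python
# Hypothetical sketch of flow-level threat inference; not Blue Hexagon's model.
import torch
import torch.nn as nn

class FlowClassifier(nn.Module):
    """Binary classifier over a fixed-length feature vector derived from one flow."""
    def __init__(self, n_features: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 128),
            nn.ReLU(),
            nn.Linear(128, 32),
            nn.ReLU(),
            nn.Linear(32, 1),  # single logit for "malicious"
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.net(x))

# Example inference: the features might encode header fields, payload byte
# histograms, URL tokens and beaconing statistics (all illustrative assumptions).
model = FlowClassifier()
flow_features = torch.rand(1, 64)          # placeholder for one extracted flow
malicious_prob = model(flow_features).item()
print(f"Estimated probability the flow is malicious: {malicious_prob:.2f}")
```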

"We train our deep learning models with a very diverse set of threat data," he said. "We actually do this in the cloud -- on the AWS infrastructure -- and have been working with them since inception to ensure the infrastructure is optimized for security."

The Blue Hexagon dashboard's threat view

Experts urge caution on AI in cybersecurity

Deep learning is indeed an interesting machine learning technique and can be used for many security use cases, said Gartner analyst Augusto Barros. But more important than understanding what it can do is understanding what deep learning in cybersecurity cannot do, Barros added.

New threat types ... won't be magically identified by machine learning.
Augusto Barros, analyst, Gartner

"Many machine learning implementations, including those using deep learning, can find threats, such as new malware, for example, that has common characteristics with what we already know as malware," Barros said in an email interview. "They can be very effective in identifying parameters that can be used to identify malware, but first we need to feed them with what we know as malware and also with what we know as not malware so they can learn. New threat types ... won't be magically identified by machine learning."

Until a couple of years ago, malware detection technology was being developed -- or trained, in the case of machine learning -- with file samples, Barros said. Deep learning can be very useful in identifying which characteristics of the files are most likely to determine if something is malware or not, he added.

"But when what we call fileless attacks started to appear, all those machine-learning-based tools analyzing files were not able to detect those attacks," Barros said. "They were just looking at the wrong place. And who does tell them where they should be looking? Humans."

Barros said he doubts any machine-learning-based system would be faster than simple signature matching. When it comes to prevention, he said, it is important to be sure of what is detected before deciding to intervene.

"Although signatures will miss unknown threats, they are very certain about what we know; antiviruses do that quite well," Barros said. "With machine learning, you'll only get a percentage of certainty -- the algorithms tell you the changes of something being bad is xx% -- so using that for intervention can be really problematic and with chances of disrupting systems."

With enterprise network complexity increasing over time, teaching the algorithm to tell good from bad is actually much harder than in classic deep learning success stories like face recognition, said Gartner analyst Anton Chuvakin.

"The variety of what is normal, what is legitimate, what is actually acceptable to business is so wide that the training of the algorithm is going to be really difficult," Chuvakin said.

When it comes to domains of security like malware detection, deep learning is working because there is a pretty large pool of data about legitimate software and malware that can be used to train the algorithm, he said.

"But, to me, for [network] traffic, it has a much lower chance of working," Chuvakin said.

To really succeed with deep learning in cybersecurity, there must be a very large volume of labeled data, he said.

"It took some of the malware analysis vendors years to accumulate [data on] malware," he said. "But, where is that data for traffic? Nobody has been collecting malicious traffic at wide scale over many years, so there is no way to point at a repository and say, 'OK, I'm going to train my algorithm on this traffic,' because that doesn't really exist. Moreover, in many cases, they don't know whether the traffic is malicious or not."
