
IBM's Watson for Cyber Security puts a new face on machine learning

The IBM Watson for Cyber Security beta program aims to augment human intelligence, but experts question if IBM can distinguish it from other machine learning products.

IBM Watson may be able to win Jeopardy!, but security experts are skeptical about the technology's ability to defeat today's cyberthreats.

The IBM Watson for Cyber Security beta program launched this week with 40 partners around the world in an effort to help security analysts make better, faster decisions from vast amounts of data, but experts said this is the same promise offered by many other products.

IBM said Watson for Cyber Security will feature natural language processing that can help it to "understand the unique language of security."

"The truth is, a lot of security vendors today are attaching [artificial intelligence] or cognitive to a number of products that are really just advanced analytics or machine learning, which are also important elements that can help in the fight against cybercrime," Diana Kelley, executive security adviser for IBM Security, told SearchSecurity. "What Watson will bring to the equation that is unique is the ability to digest vast amounts of both structured data, as well as all of the intelligence that exists in natural language, like blogs, white papers and research reports. For example, there are around 10,000 security research papers published each year, and 60,000 security blog posts published every month."

IBM described Watson as a cognitive technology, rather than artificial intelligence (AI), and Mike Stute, chief scientist for cloud networking company Masergy Communications Inc., based in Plano, Texas, said this is an important distinction to be made.

"I applaud IBM for taking a stance on this. AI does not yet exist in a general form. There are two types of AI: special AI (SAI) and general AI (GAI). Special AI is a system that is able to perform a single task well that requires human intelligence. General AI is what we all think about as AI -- a computer that has the ability to think at or beyond human level, processing sensory data, making inferences, logical decisions, etc.," Stute told SearchSecurity. "I think IBM is saying, 'This is not AI, it is cognitive,' meaning they are acknowledging this is machine learning, not AI. As much as IBM would like us to believe Watson is a GAI, they are admitting it is really a SAI based on machine learning."

Chris Pogue, CISO for Nuix, based in Herndon, Va., said this may just be IBM's "way of distinguishing itself from the competitors and using a term that has not been overused or applied in multiple areas of technology. By definition, cognition and intelligence, while not identical, share characteristics, such as capacity for learning, the process of knowing or perceiving and grasping truths that make them nearly synonymous."

IBM said the Watson for Cyber Security beta program will focus on detecting if an attack is associated with a known malware or cybercrime campaign and identifying suspicious behavior that may be malicious.

Simon Crosby, co-founder and CTO of endpoint security vendor Bromium Inc., based in Cupertino, Calif., said these features don't distinguish Watson at all.

"AI for malware detection is actually probably useless. Over 99% of malware is seen exactly once and morphs into a new undetectable form on each click. AI is great at speeding up learning from big data stores of monitoring data to try to pick out anomalies," Crosby told SearchSecurity. "If we assume that it will be perfect, we are in for big disappointments. There are only bold vendor marketing claims at the moment and no way to compare different tools head to head."

Kelley said the addition of information from human security intelligence could set IBM Watson for Cyber Security apart.

"Eighty percent of the security intelligence out there today is created by humans and designed to be read by humans -- blogs, research reports and other natural language documents," Kelley said. "This is more information than a human analyst could ever read, but isn't accessible to traditional security technologies, so bringing AI systems with natural language processing into the equation will be critical as we move forward. There's just too much threat data for analysts to keep up."

Stute said IBM Watson has displayed the ability to use machine learning "in certain data sets, but the concept that you can take machine learning expertise and apply it to any category isn't correct."

"Security machine learning experts will still do a better job at building cognitive security systems than general machine learning experts. I would imagine IBM has more value than some, but less than others," Stute said, adding the Watson brand could confuse customers. "[IBM has] several machine learning verticals that they play in [medical, contextual language, audience message targeting] that they mark as Watson as a way to make Watson seem like an SAI. But in reality, each of these is its one artificial neural network technology (ANN), not one ANN doing all these things."

Kelley said the future of cognitive systems and AI in cybersecurity should focus on augmenting human intelligence.

"We still need humans to be the ultimate decision-makers. AI systems can help security analysts focus on the most important threats, provide them with the necessary context about those threats, as well as recommendations to address or eradicate the threats. But, ultimately, there are a lot of factors that only humans can fully understand, and critical business and risk decisions that only humans can make," Kelley said. "AI technologies may evolve with rule sets for automated actions for very specific types of events, but we need human judgment to understand the full picture of cybersecurity."

Crosby agreed, saying AI has a "fundamental limit."

"The fundamental limit was established by Turing 80 years ago, and it is called the halting problem," Crosby said. "Essentially, it is impossible to build a perfect detector. But AI used as a tool to help speed human decision-making is a productive and rapidly advancing field."

Pogue said, "As humans, the most intelligence we have to operate with, the more informed our decision-making process becomes."

"If that process can be mapped through a predefined set of criteria to identify attacker tactics, techniques and procedures, and indicators of compromise, then it's logical to assume that a computer can programmatically make those decisions faster, at greater scale and a larger corpus of data," Pogue said.

Next Steps

Learn more about how artificial intelligence technology is poised to power mobility.

Find out how AI may soon find and patch software bugs automatically.

Get info on how software deals with conversational language.


Join the conversation

>> "The fundamental limit was established by Turing 80 years ago, and it is called the halting problem," Crosby said. "Essentially, it is impossible to build a perfect detector. But AI used as a tool to help speed human decision-making is a productive and rapidly advancing field."
<<

Not to split hairs, but Turing established that no computational process could invariably tell whether any given second computational process was going to halt without running that second program. He makes no special exceptions for human minds, which presumably operate computationally. If AI isn't as good as humans when it comes to judgment calls in security, that's because AI isn't, at least at present, capable of the same computations as human minds, but Turing doesn't say anything about whether AI can or can't achieve that (indeed, it seems pretty likely that he thought it was at least theoretically possible).