Advance your security operations center with AI

Powering a security operations center with AI systems not only automates tasks but also complements admins' efforts to combat threats and transform processes more effectively.

AI, once the technology of legend, is making its way into an organization near you -- if it hasn't already. Business use cases are rapidly expanding to include AI-enabled multitasking collaborative robots, or cobots, in manufacturing and autonomous transportation capabilities for piloting self-driving vehicles.

One industry sector that stands to benefit greatly from the integration of AI is infosec, namely in security operations centers (SOCs).

"For decades, those who work in SOCs have depended on tools that were great at gathering data, but then employees would need to do a significant amount of review and manual analysis of that data to determine what it means," said Rebecca Herold, IEEE member and founder and CEO of The Privacy Professor.

Modern and up-and-coming AI security technologies are ready to change that.

"Soundly engineered AI is now helping support better analysis of SOC data, much quicker analysis results and more accurate determinations for effective actions to take in response to the data," Herold said.

Quicker, more accurate analysis isn't the only way AI is benefiting SOCs. It's also crucial to solving an age-old security problem: employing proactive security instead of reactive.

"A well-designed AI SOC tool can ultimately become a proactively protective tool to prevent harms, instead of the reactive types of actions that have typically been used after harm has occurred using manual methods and processes," Herold added.

Read on as Herold discusses AI's influence on cybersecurity and infosec teams, the challenges that come with still-maturing AI cybersecurity tools and why AI is the future for SOCs across the globe.

Editor's note: This transcript has been edited for length and clarity.

How can AI transform traditional SOCs beyond improving the quality and speed of analysis?

Rebecca Herold: AI can also support automatically generated reports about SOC security levels and make those reports understandable for the business executives who want to know how -- and how well -- their processing activities are secured and where they are vulnerable to cyberattacks.
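
As a rough illustration of the kind of automated executive reporting Herold describes, here is a minimal sketch that rolls hypothetical SOC alert records up into a plain-language summary. The field names, severity levels and data are assumptions made up for the example, not any particular product's schema.

```python
from collections import Counter

# Hypothetical alert records -- field names and severities are assumptions,
# not a real SOC product's schema.
alerts = [
    {"severity": "critical", "asset": "payments-db", "status": "open"},
    {"severity": "high", "asset": "vpn-gateway", "status": "contained"},
    {"severity": "high", "asset": "payments-db", "status": "open"},
    {"severity": "low", "asset": "hr-portal", "status": "closed"},
]

def executive_summary(alerts):
    """Roll raw alert records up into a plain-language summary for executives."""
    by_severity = Counter(a["severity"] for a in alerts)
    open_critical = [a for a in alerts
                     if a["severity"] == "critical" and a["status"] == "open"]
    lines = [f"Total alerts this period: {len(alerts)}"]
    lines += [f"  {sev}: {count}" for sev, count in by_severity.most_common()]
    if open_critical:
        assets = ", ".join(sorted({a["asset"] for a in open_critical}))
        lines.append(f"Unresolved critical exposure on: {assets}")
    return "\n".join(lines)

print(executive_summary(alerts))
```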

There are other possibilities as well. The SOC team often works with other teams throughout the organization, such as digital forensics, incident response, cybersecurity analysts, fraud investigators, engineers and organizational managers. Increasingly, SOC teams are using new technologies for security analytics, threat modeling and intelligence, impact analysis, and for ensuring network, systems, applications and data security risks are identified, analyzed and quickly addressed.

Historically, those activities were largely reactive and relied upon experienced and knowledgeable workforce members. However, a growing number of AI tools are appearing on the market intended to automate, improve and perform those historically manual activities, while also providing additional types of insights that were not possible using manual-only analysis. Some of the new capabilities are meant to identify when threats could result in attacks on the corporate network and also automatically shut down certain services or subnets based on activities that the AI determines will result in harm.
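
The automated-containment capability Herold mentions can be pictured as a simple policy loop: something scores recent activity per subnet, and anything above a risk threshold is isolated pending analyst review. The sketch below is only a toy illustration of that idea under those assumptions; the score_activity model, the threshold and the quarantine_subnet stub are hypothetical stand-ins, not a vendor API.

```python
# A minimal sketch of AI-driven containment: score each subnet's recent
# activity and isolate anything above a risk threshold. The scoring model
# and quarantine action are placeholders, not a real product's interface.

RISK_THRESHOLD = 0.8

def score_activity(events):
    """Toy risk score: fraction of events flagged as suspicious by upstream analytics."""
    if not events:
        return 0.0
    return sum(1 for e in events if e.get("suspicious")) / len(events)

def quarantine_subnet(subnet):
    """Stand-in for a firewall/SDN call that isolates the subnet."""
    print(f"[ACTION] isolating {subnet} pending analyst review")

recent_activity = {
    "10.0.1.0/24": [{"suspicious": False}, {"suspicious": False}],
    "10.0.9.0/24": [{"suspicious": True}, {"suspicious": True}, {"suspicious": True}],
}

for subnet, events in recent_activity.items():
    if score_activity(events) >= RISK_THRESHOLD:
        quarantine_subnet(subnet)
```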

Are AI systems today mature enough for enterprise security operations? Which AI-enabled tools should enterprises consider?

Herold: Not all AI systems are created equal. Some have gone through long and arduous engineering, followed by strenuous testing, before being sold and used in production. Other vendors threw something together quickly, called it AI and then sold the system without thorough testing -- often putting much effort into marketing the unproven product.

Some AI tools that organizations are successfully using have been in use for several years already and are well vetted. These include those for anomaly detection that identify shifts or subtle differences in network or system behavior that could signal malicious activity that manual log reviews or traditional security tools would not be able to identify. AI is also being used for anti-money laundering and compliance enforcement, identifying suspicious behavior to support red-flag requirements or to determine the attributes that signal noncompliance activities.

Before investing in AI tools, organizations need to carefully determine whether the data the AI tools require is already being collected in some way by the organization; whether using that data for the intended AI purposes will violate any contractual or other legal requirements the organization operates under; and whether they have the internal staff to support the tools. Many AI vendors offer contracted services to cover what organizations cannot staff internally, but organizations need to understand the additional costs of those services.

Another consideration is how long the AI tool has been on the market. Unless an organization wants to take the risk of using unproven AI tools so it can be on the bleeding edge of AI tech use, it may want to wait to see how well the AI tool actually works, as determined by the experiences of the early adopters.

How will AI change the various roles of not only SOC employees, but workers across the organization?

Herold: AI will change the roles of most types of IT and information security employees. This is creating great concern for some workers -- they fear they will be replaced by AI-controlled robots. However, I see AI SOC tools creating opportunities for information security pros to more efficiently perform their job responsibilities, while also using their new insights to improve the overall information security program and provide more collaborative activities with others outside of the SOC.

What sort of learning curve will there be?

Herold: It depends upon how well the AI is engineered, whether there are any flaws in the AI algorithms, and the types of tasks the AI tools were created to do or the decisions they were created to make. It is important for every organization considering AI tools to request validation and documented proof of the AI results and activities from the vendor offering them. Also, to reduce the learning curve, vendors should make a wide variety of training available, such as videos and interactive use case scenarios with the AI tool, along with good, old-fashioned documentation about the tool.

What are the top AI adoption challenges?

Herold: I've seen at least a couple of common ones. The first is determining if the AI tool is valid, if it will provide accurate analysis and actions, and if there are any flaws in its logic that could result in recommendations or actions that are biased or harmful. With these comparatively new types of tools, there is not a lot of history to review, and most SOC managers don't know the best questions to ask AI tool suppliers and vendors about how accurate the tools are, or about the breadth and depth of testing that was done to validate that accuracy.
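
One way to make that accuracy question concrete is to benchmark a tool's verdicts against incidents the SOC has already labeled by hand. The sketch below computes precision and recall from such a comparison; both the ground-truth labels and the tool's verdicts here are invented for illustration.

```python
# Hypothetical evaluation of a tool's verdicts against analyst-labeled ground
# truth: 1 = true incident, 0 = benign. The data is invented for the example.
ground_truth  = [1, 0, 0, 1, 1, 0, 0, 0, 1, 0]
tool_verdicts = [1, 0, 1, 1, 0, 0, 0, 0, 1, 0]

true_pos = sum(1 for t, p in zip(ground_truth, tool_verdicts) if t == 1 and p == 1)
false_pos = sum(1 for t, p in zip(ground_truth, tool_verdicts) if t == 0 and p == 1)
false_neg = sum(1 for t, p in zip(ground_truth, tool_verdicts) if t == 1 and p == 0)

precision = true_pos / (true_pos + false_pos)  # how many flagged items were real
recall = true_pos / (true_pos + false_neg)     # how many real incidents were caught

print(f"precision: {precision:.2f}, recall: {recall:.2f}")
```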

The second adoption challenge is selling management on the idea of using AI. Unless management is highly tech-savvy, they will not understand the need to replace current tools with something else. Those who want to use the AI will need to demonstrate how the AI will 1) improve security; 2) be worth the investment; and 3) support legal compliance requirements.

Are AI-enabled SOCs in use today? What sort of growth trajectory do you expect?

Herold: I'm not aware of any fully AI-enabled SOCs, where all the activities are controlled by AI. However, there are growing numbers and types of AI tools currently being used. Plus, more vendors are introducing tools as time goes on.

Other things will affect AI adoption as well -- namely, the incorporation of more IoT devices into organizational networks and the wider deployment of 5G. Also, broader and more specific data protection laws and regulations are going into effect, most of which require a significant amount of activity logging, tracking of personal data and the ability to quickly determine where data is located. This will make SOC AI use more prevalent.

I expect the use of SOC-supporting AI tools to increase steadily, with more tools and vendors consistently entering the market. Such tools use AI to analyze activity and automatically identify previously unknown attacks -- basically, zero-day threats that may be deployed before traditional tools can identify them or even become aware of them.

Rebecca Herold

Rebecca Herold is an IEEE member and CEO and founder of The Privacy Professor, a consultancy she established in 2004. She has over 25 years of systems engineering, infosec, privacy and compliance experience. She is also the co-founder of Simbus LLC, an infosec, privacy, technology and compliance management cloud service founded in 2014. Herold serves as an expert witness and was an adjunct professor for nine years, while also building and managing her businesses.

Herold has authored 19 books, dozens of book chapters and hundreds of published articles covering infosec, privacy, compliance, IT and other related business topics. Herold is a member of the NIST Cybersecurity for IoT Program development team, supporting the development and application of standards, guidelines and related tools to improve the cybersecurity of connected devices and the environments in which they are deployed. Herold serves on the advisory boards of numerous organizations. She led the NIST Smart Grid Interoperability Panel privacy subgroup for eight years and was a founding member and officer for the IEEE P1912 Privacy and Security Architecture for Consumer Wireless Devices Working Group.
