SIEM market overview: Gartner's Mark Nicolett

At the recent Gartner Security and Risk Management Summit 2010, Gartner VP and Distinguished Analyst Mark Nicolett discusses SIEM market vendor consolidation, the myth that SIEM is a cost-saving effort, and more.

Read the full text transcript from this video below. Please note the full transcript is for reference only and may include limited inaccuracies. To suggest a transcript correction, contact    


Mark Nicolett: Initially, when we first started covering the segment, the typical buyer
was a very large organization that wanted to stand up a security operations
center. It was very much focused on network security and external threats, and
interest in the technology broadened as compliance drivers came into play.
The first broad compliance driver was Sarbanes-Oxley, and so the technology
started to be deployed on a fairly widespread basis for privileged user
monitoring. Then PCI came along and broadened the scope of deployments to
include pretty much any company of any size that was handling credit card data. As far
as this being the year of SIEM, I would say no, it's been a gradual ramp-up
in activity. We are starting to see a progression of use cases, or a return,
actually, to some security-focused use cases in some geographies outside of
North America.


Mark Nicolett: There are a lot of vendors, I mean we're tracking over 20 vendors, in
this segment. There's a group of larger vendors that are in this space that
have broad portfolios of other products and quite a few point solution
vendors. I've actually been surprised at the resilience of some of the
smaller vendors in terms of continuing on, even in the face of competitors
that have grown to 10 times the size of some of the smaller point solution
vendors. So I felt that the segment was ripe for consolidation years
ago. Consolidation has not occurred to the degree that I had expected.
We've seen some gradual, gradual consolidation, a couple of acquisitions
per year. We expect to see a few more this year and the market seems to be
able to support a large number of vendors.


Mark Nicolett: This is not a cost saving exercise by any stretch. This is an exercise
in implementing capabilities that need to be there, from a security
perspective. That will help a company, for example, more quickly detect
that they've been breached, or detect a targeted attack. It represents work
that needs to be done, but is not being done, by an organization today.
It's never an exercise in saving money. Absolutely, if the technology is
successful, there's going to be more resource expended on follow-through
in solving issues that are present, that the company was blind to, that it
wasn't aware of.


Mark Nicolett: The steps are probably not that much different than any other I.T.
project. You need to identify the ultimate goals of the project. Initially
the driver, for example, might be to solve a compliance issue. You also
want to involve other types of stakeholders that will ultimately use the
technology. You want to make sure that whoever's responsible for network
security, the database administration areas, the server administration
areas are involved because, ultimately, you're going to be monitoring the
activities of privileged users in those areas. There's going to need to be
follow up when issues are uncovered. You need to make sure that you understand what the
compliance requirements really are. You need to understand the retention
requirements, reporting requirements, and so on. There's a variety of
stakeholders that need to be involved at the outset of the project, so you
can really understand what the final scope of the deployment will be.
Another aspect of these projects is that, in many cases, there isn't enough
information known about the ultimate event rates that will be experienced
through a full deployment. We recommend a lot of upfront work to enable the
level of audit functions that are going to be required, to generate the
events from each source, and to measure the event rates from a
representative production instance of each source type, so that
you can do some extrapolation and have a decent estimate of event
rates from different portions of your infrastructure. You need that
because it's that type of information that has a major influence on the
ultimate design that is going to be required for your environment, and it
starts to have an effect on the match of a particular vendor's architecture
to your requirements. The short summary is that companies need to know
a lot about event rates and event sources before they go to the vendors to
try to get some sort of solution proposed.
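The measure-then-extrapolate step Nicolett describes can be sketched roughly as follows. All source types, per-instance rates, and instance counts below are hypothetical illustrations for the sake of the example, not Gartner figures or benchmarks.

```python
# Hypothetical sketch of event-rate extrapolation: measure events per
# second (EPS) on one representative production instance of each source
# type, then scale by how many instances of that type exist. The numbers
# are invented for illustration only.

measured_eps = {            # EPS observed on one representative instance
    "firewall": 850.0,
    "windows_server": 45.0,
    "database": 120.0,
}

instance_counts = {         # how many of each source type are deployed
    "firewall": 12,
    "windows_server": 400,
    "database": 30,
}

def estimate_total_eps(measured, counts):
    """Extrapolate aggregate EPS across the full planned deployment."""
    return sum(measured[src] * counts[src] for src in measured)

total = estimate_total_eps(measured_eps, instance_counts)
print(f"Estimated aggregate event rate: {total:,.0f} EPS")
# → Estimated aggregate event rate: 31,800 EPS
```

Even a rough estimate like this, broken out by portion of the infrastructure, is what lets you compare a vendor's collection and storage architecture against your actual load before you buy.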


Mark Nicolett: Yes, I would say that, by and large, customers have gotten benefit out
of deployments. Do they make comments like, "It was harder than we expected.
Were we surprised by the event rates? Did we have to go back and
re-architect?" Yeah, we get feedback like that, but not in a high percentage
of cases where the technology is a failure. Where we do see that, there are issues
around how the project was run. How much was invested in customizing the
technology once it was deployed and so on. So it's not easy, but given the
proper amount of attention and effort, it's been, by and large, a
successful technology.


Mark Nicolett: The sheer volume of events that are generated. If you're off by a
large amount, then you'll have to stop midway through your deployment,
redesign, re-architect, and redeploy. The performance issues also come back
later. Think about a company that perhaps deployed initially for
compliance reporting, and then started to add a little bit of real-time
monitoring to get some of the security benefit. At some point they're going
to become more aggressive in using that information store, which has been
built up over time, for forensic investigation. That is a sort of
second, late-phase stress point in terms of performance: going after a
large event store that's built up over a period of years with ad hoc
queries. We see that as a performance issue in later phases. In general,
just make sure that you do the process work up front, to make sure that
areas of the organization that will need to respond to an incident that's
been uncovered are obligated and expected to, and that you've got the
workflow in place to orchestrate that response. Getting all the
stakeholders involved and getting their commitment to support the
mitigation work is going to be one of the outcomes of a successful
deployment.

Mark Nicolett: User context: information about a user, the user's role, the user's
status in the company, the normal patterns of access for that user.
Application context: understanding the access model and transaction model
of particular applications, so that the technology can be used for exception
monitoring at the application layer. Then the addition of data context,
because the ultimate goal of most targeted attacks is either to steal data
or to compromise an account. You need to have intelligence about data so
that you can determine whether some data access is OK to ignore
or whether a red flag should be raised. Finally, external threat
context: information about emerging threats, new attack patterns, lists of
bad actors, IP blacklists, and so on, so that you can discern outbound
communications to some IP address that would otherwise be easy to overlook.
Unless you knew that it had something to do with botnet control, for example. Then all of
a sudden everything changes. Moving forward, it's adding data,
application, user, and external threat context to enable more effective
exception monitoring.
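The botnet example above can be sketched as a minimal enrichment check. The IP addresses and the severity labels here are hypothetical; a real SIEM would draw its blacklist from a threat-intelligence feed rather than a hard-coded set.

```python
# Minimal sketch (hypothetical data) of applying external threat context:
# an outbound connection that looks innocuous on its own gets flagged
# when the destination IP appears on a known botnet command-and-control
# (C2) list.

BOTNET_C2_IPS = {"203.0.113.9", "198.51.100.24"}  # illustrative blacklist

def classify_outbound(event):
    """Return a disposition for an outbound-connection event dict."""
    if event["dst_ip"] in BOTNET_C2_IPS:
        return "alert: possible botnet C2 traffic"
    return "ignore"

print(classify_outbound({"src_ip": "10.0.0.5", "dst_ip": "203.0.113.9"}))
print(classify_outbound({"src_ip": "10.0.0.5", "dst_ip": "93.184.216.34"}))
```

The point of the sketch is the "all of a sudden everything changes" moment: the same event yields a different disposition only because external context was joined against it.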

