Network traffic collection, analysis helps prevent data breaches

Government agencies were among the first to retain mountains of network traffic data, but large banks, financial firms and healthcare companies are following suit in an effort to respond to alerts generated by intrusion defense systems and to speed digital forensics in the event of a breach. Steve Shillingford, CEO of Solera Networks Inc., says his company's appliances collect and store network traffic, allowing administrators to search and navigate through it as easily as searching through files on a computer. Getting companies to focus on remediation has been a challenge, however: in a recent vendor-conducted survey of more than 200 people at organizations with at least 1,000 network nodes, Solera Networks found that the technology remains relatively unfamiliar. While 92% of respondents said capturing and recording all network traffic is important to network forensic capabilities, only 28% said they were very familiar with network forensic solutions. In this interview, Shillingford says the technology is ready for prime time.

Another issue of concern right now is sensitive data flowing through virtual environments. Is it possible to collect and store that traffic?

We're still very early on the hype curve with VM consolidation, and while I think it's a tremendous opportunity for customers to cut excess cost out of their environments, you have to do it with your eyes open. When you've consolidated physical servers onto one big server, you lose some of the visibility that your network management and network security tools provided. We know this is a problem because you see network management vendors, including VMware and other players, looking to provide more visibility inside the black box. It's debatable whether we've seen our first hypervisor attack, but it will happen. We're still in the early days there, and I think some best practices still need to emerge for it to be better understood.

We have a product called a V2P Tap that essentially does what a network tap does: it takes copies of packets as they cross a certain network segment and regenerates them out to a different destination. You can collect all the VM traffic that resides on a certain host and point it out via an open interface to an existing toolset. We support gigabit speeds on the virtual network, but we already see networks exceeding that. We're seeing 10 gig in enterprise-class deployments, though those 10 gig lines are only about 10% utilized. We hope to support 10 gig lines in the future.
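
The copy-and-regenerate behavior Shillingford describes is the classic tap pattern: duplicate every frame crossing one segment and re-emit it on another interface for out-of-band tools. Below is a minimal sketch of that pattern in Python with scapy; it is an illustration, not Solera's implementation, the interface names are placeholder assumptions, and capturing or injecting packets requires elevated privileges.

```python
# Minimal tap sketch: copy every frame seen on a monitored interface and
# regenerate it onto a mirror interface for an out-of-band toolset.
# Illustrative only; not the V2P Tap. Interface names are assumptions.
from scapy.all import sniff, sendp

MONITOR_IFACE = "eth0"  # segment being tapped (placeholder)
MIRROR_IFACE = "tap0"   # destination for the regenerated copies (placeholder)

def mirror(pkt):
    # Re-emit the captured frame, unmodified, on the mirror interface.
    sendp(pkt, iface=MIRROR_IFACE, verbose=False)

# store=False keeps scapy from buffering every captured packet in memory.
sniff(iface=MONITOR_IFACE, prn=mirror, store=False)
```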

If you are collecting and storing all of this network traffic, is it possible that you are going to be collecting Social Security numbers, credit card information and other sensitive data?
We would potentially be collecting sensitive data, but we implement the system so that you can take an active configuration step and eliminate that kind of traffic from being captured, so it never crosses our wires. We've also designed the architecture of the system to be essentially invisible on the network. From a technical standpoint, we don't have a traditional TCP/IP stack, so a hacker on the network wouldn't see us, because we're not identified as a network device.
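
The "active configuration step" he mentions amounts to filtering sensitive flows out before anything is written to disk. One generic way to do that is a capture filter; the sketch below uses a BPF expression to exclude a hypothetical sensitive subnet. The subnet, interface and filename are invented for illustration, and this is not necessarily how Solera's appliances implement it.

```python
# Hypothetical capture-exclusion sketch: a BPF filter keeps traffic
# touching a sensitive subnet (e.g., cardholder data) from ever being
# captured or stored. Subnet, interface and filename are assumptions.
from scapy.all import sniff, wrpcap

EXCLUDE_SENSITIVE = "not net 10.20.30.0/24"  # hypothetical payment subnet

packets = sniff(iface="eth0", filter=EXCLUDE_SENSITIVE, count=1000)
wrpcap("capture_sanitized.pcap", packets)  # excluded flows never land on disk
```
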
How does your company differ from a standard compliance auditing vendor such as nCircle or Alert Logic? Do you compete head to head with those vendors?
We're not in the compliance market. My experience with e-discovery and compliance companies is that they're really interrogating system-level repositories such as file servers, databases and email systems. They do a very nice job of aggregating that data into a single point where people can actually make some sense of it. Our compliance angle is a little different: you might be able to delete a file from a file server, and you may even be able to delete a payable out of a database, but if you are capturing at the network level, you can't delete the packets. That's where we provide that last line of defense. I don't see us being used to provide evidence in a lawsuit, but I do see us being used when an online marketer has been breached, to determine how many credit cards have been exposed.

I understand the benefit of collecting and storing network traffic in the event of a breach, but what is the benefit of using the data prior to a breach?
I think there is a level of frustration, and maybe dissatisfaction, with the current strategies around prevention and remediation, and I see this as a natural evolution as people realize that forensics is going to be a critical component of an overall security posture. … A bank always has a component of prevention and remediation, and there's almost always a surveillance or incident-response component. We think that piece has been tremendously under-allocated in the network security world. In any rolling seven-day window you can find a high-profile breach in almost any corner of the world. It's impossible to secure your network 100%, and it only takes one incident to cause a massive breach. From a budget perspective, the incident response piece hasn't been invested in properly. … One of the most common use cases is for our customers to discriminate between the important alerts from their intrusion defense system and the ones that are just noise. There's a tremendous signal-to-noise problem. They can take an alert, integrate it back into our system and get a time slice. Then the administrator can decide whether it's something they need to allocate more time to, or a systematic alert that doesn't present a great risk.
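
The alert-to-time-slice workflow is, in effect, an indexed lookup: given an alert timestamp, retrieve the recorded traffic from a window around it. Here is a rough sketch of the idea, assuming captures are stored as hourly pcap files; the directory and naming convention are invented for illustration, not Solera's format.

```python
# Rough time-slice sketch: given an alert timestamp, gather the capture
# files covering a window around it (15 minutes before to 15 after).
# Assumes hourly pcap files named capture-YYYYMMDD-HH.pcap; this naming
# scheme and directory are invented, not Solera's format.
from datetime import datetime, timedelta
from pathlib import Path

CAPTURE_DIR = Path("/var/captures")  # placeholder location

def time_slice(alert_time: datetime, before_min: int = 15, after_min: int = 15):
    start = alert_time - timedelta(minutes=before_min)
    end = alert_time + timedelta(minutes=after_min)
    files = []
    t = start.replace(minute=0, second=0, microsecond=0)
    while t <= end:
        f = CAPTURE_DIR / f"capture-{t:%Y%m%d-%H}.pcap"
        if f.exists():
            files.append(f)
        t += timedelta(hours=1)
    return files

# "Rewind to 1:45 p.m." for a 2 p.m. Friday alert:
print(time_slice(datetime(2008, 6, 6, 14, 0)))
```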

One of the findings from the survey is that, for more than half of those surveyed, it takes two to 10 days from the point of a breach to determine the scope of an incident. If you have this system in place, can that be improved?
We have many customers who go through this. They are presented with a slew of potential alerts on their dashboard. The normal course is to do some log correlation and work through all your network systems to pinpoint the IP ranges of the affected systems. In one particular, very complex, thousand-node system, this took an average of about 53 hours per incident when you factor in the human time and system time. Now imagine being able to say: we got an alert at 2 p.m. on Friday that something weird was happening with the system; let's not only rewind to 2 p.m. on Friday, let's rewind to 1:45 p.m. and see what happened before the incident. In this particular case, at a large national lab, it took them three hours. They were able to take 53 hours down to three.
Are network administrators tied to your proprietary tools to conduct analysis?
There is a wealth of reporting and dashboard-type tools, both commercial and open source, and they're all widely used. What we thought we could bring, in terms of innovation and underlying value, is the ability to capture actual packets and data at high rates over long time periods. That's the hard part. Think of your DVR: we wanted to capture the content, but then let you hook up any television you want to watch the actual recordings.
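
The DVR analogy implies storing recordings in an open format that any analysis tool can read; for packet data the de facto standard is pcap. The sketch below, with invented filenames, shows a stored capture being re-read and re-filtered with scapy, after which the slice could just as easily be opened in Wireshark or tcpdump.

```python
# Open-format sketch: packets written as standard pcap can be read back
# by any pcap-aware tool, not just the capture vendor's own console.
# Filenames are placeholders.
from scapy.all import rdpcap, wrpcap

packets = rdpcap("capture_sanitized.pcap")            # read the stored "recording"
dns_only = [p for p in packets if p.haslayer("DNS")]  # re-filter with any criterion
wrpcap("dns_slice.pcap", dns_only)                    # hand off to Wireshark, tcpdump, etc.
```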
