

Marcus Ranum chats with Juniper Networks’ Chris Hoff

Liquid computing may be a pipe dream. But context and deployment remain the biggest information security challenges, says Hoff.

The steady march toward automated technology -- whether in self-driving cars, self-protecting data or software-defined networking -- is designed to streamline complexity and lessen the potential for human error.

“We’re starting to see real implementations of network functions virtualization and software-defined networking across both service providers and enterprises,” says Christofer Hoff, vice president and CTO of security at Juniper Networks Inc. in Sunnyvale, Calif.

A 20-year security veteran, Hoff has worn the CISO hat as the director of enterprise security at a $25 billion financial services company. Prior to Juniper Networks, Hoff served as the director of cloud and virtualization for Cisco Systems’ security technology business unit, Unisys Corp.’s chief security architect and Crossbeam Systems’ chief security strategist.

An early proponent of virtualization and cloud security, Hoff is a founding member of and technical advisor to the Cloud Security Alliance, and founder of both the CloudAudit project and the HacKid conference. Marcus Ranum caught up with Hoff to discuss the security challenges posed by “liquid computing,” in which data and workflows follow users from device to device and the “pipes,” or connections, remain fluid.

Marcus Ranum: Last time you and I were arguing about something, I was trying to convince you that virtualization was a bad idea. Obviously, I rather conclusively lost that one. And I was wrong; it hasn't emerged as a full-scale security disaster -- yet. So, now they’re talking about ‘liquid computing’ and ‘liquid networking.’ What do you think?


Chris Hoff: Threat models and context notwithstanding, any new disruption, innovation or operational model is bound to bring out the cynics in security wonks like us, because frankly it’s not about the technology; it’s about how it’s implemented. Virtualization and cloud can absolutely be leveraged to deliver fantastically well-secured architecture, or incredible security and privacy disasters, depending upon how they are put into practice. My favorite axiom is: If your security sucks now, you will be pleasantly surprised by the lack of change should you embrace the cloud.

In reality, you weren’t really wrong about virtualization -- you were just being the canary in the coal mine. Maybe we’ve been lucky, or maybe we’re getting better?

To your point regarding liquid computing or liquid networking -- I assume you’re really referring to the capability to rapidly, flexibly and securely deliver workloads (services and applications) and provision and orchestrate the physical and virtual networks that they ride upon in a very agile and ‘fluid’ way, yes?

Further, as I understand the context, this includes the trend of moving more security into the application and information layers themselves, and … things like encryption at rest, in transit and in use, with a consistent and abstracted policy language that defines and implements security up and down the stack? Let’s get real.

I think this model is being adopted by developers and converged teams who are fundamentally attempting to drive security deeper and more holistically into the lifecycle and across the operational model. I also think these teams are using platforms -- like public clouds -- where the infrastructure and security band-aids one might rely upon in a data center, with equipment you own and operate, simply don’t exist. So they must invest in securing their applications and information, because the capability doesn’t present itself elsewhere.

Now, the real challenge is that those who are doing this are generally very sophisticated. They understand threat modeling; they have developers and engineers who are accountable for and understand security and privacy; they have invested in a software development lifecycle; and they take a very proactive approach toward security as a measure of service and quality. That’s very rare.

The way we implement security generally falls along a spectrum that looks like this:

Network-centric > application-centric > information-centric > host/OS-centric.

Depending upon what’s available to you in terms of capability, how fresh the blueprints and design patterns are, and the overall application architecture, one generally spreads security out across these delivery vehicles. Today we couch that in the convenient phrase ‘defense in depth.’ Frankly, this is mostly a cop-out to disguise the fact that we suck at securing applications. There lurks the real, raw and very difficult truth that actually securing the thing that matters most -- information -- is really hard.

Most people, especially if they consume and then transform information that they don’t create or own end-to-end, are going to apply band-aids. We know how much those hurt when they get ripped off.

Ranum: It seems like liquid computing is reliant on digital rights management, which could be problematic because DRM systems seem to get broken fairly quickly. I keep hearing people talk about ‘self-protecting data.’ How do you see that playing out? Will liquid computing wind up being an ‘inside the firewall’ phenomenon? A corporate network might be liquid within its own perimeter. ... Uh, I can see some horrible play on words looming: ‘Liquid within its pipes?’

Hoff: I think, as an industry, we have toyed with various implementations of ‘DRM’ across the last couple of decades. We’ve leveraged encryption, tokenization, obfuscation and role-based access control as ways of dealing with the lifecycle of information -- especially given how distributed end-user computing has become -- and it’s just a very difficult thing to control.
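
Tokenization, one of the techniques Hoff names, is cheap to sketch. The Python below is a minimal illustration, not any product’s API: the vault class, its methods and the token format are hypothetical stand-ins, and a production token store would live behind role-based access controls rather than in memory.

    import secrets

    # Minimal tokenization sketch: sensitive values are swapped for opaque
    # tokens; only the vault (an in-memory dict standing in for a hardened
    # token store) can map a token back to the original value.
    class TokenVault:
        def __init__(self):
            self._store = {}  # token -> original value

        def tokenize(self, value: str) -> str:
            token = "tok_" + secrets.token_urlsafe(16)
            self._store[token] = value
            return token

        def detokenize(self, token: str) -> str:
            # This call is the real control point: in production,
            # role-based access checks would gate it.
            return self._store[token]

    vault = TokenVault()
    token = vault.tokenize("4111-1111-1111-1111")
    print(token)                    # safe to hand to downstream systems
    print(vault.detokenize(token))  # only privileged callers should reach this

The point of the pattern is that downstream systems never see the original value, so the information’s lifecycle can be controlled at one choke point instead of everywhere the data travels.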

Ranum: The liquid networking idea makes more sense to me -- as a solution for cloud providers and advanced data centers or large enterprises. Like virtualization, it ought to be secure enough as long as the management/control plane can be kept separate from the data plane. Historically we haven’t done a very good job of that though, which was my main complaint about virtualization. Do you think we’ll get it right with liquid networking?

Hoff: I think I might disagree that control and data plane separation has been a disaster across the board -- there have been some spectacular flameouts as well as stellar implementations using this approach. And, frankly, it’s given birth to entire industries that have scaled because of this model. However, like I said previously, context and deployment matter.

I would agree, however, that this notion of centralized ‘management’ or control planes requires a very deep understanding of security models, because as I think about what you’re implying, the concern about putting all of one’s eggs in one basket -- and having them owned -- is a very real one.

That said, when we get back to your liquid networking concept, we’re starting to see real implementations of network functions virtualization and software-defined networking across both service providers and enterprises. This is being done as a functional reaction to how quickly applications and services can be deployed. So being more ‘liquid’ is really about getting the network and security components out from being blockers of agile application and service delivery.
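
In practice, the ‘fluid’ provisioning Hoff describes tends to be intent-driven: desired network and security state is expressed as data and handed to a controller rather than configured box by box. The Python sketch below assumes a hypothetical REST-speaking SDN controller; the URL, the /policies endpoint, the payload shape and the field names are all illustrative.

    import json
    import urllib.request

    # Intent-driven provisioning sketch: express desired state as data and
    # push it to a controller, which programs the physical and virtual
    # network elements to match. The controller URL and its /policies
    # endpoint are hypothetical.
    CONTROLLER = "https://sdn-controller.example.com/api/v1/policies"

    intent = {
        "app": "billing",
        "segments": ["web", "db"],
        "rules": [
            # Only the web tier may reach the database tier, on one port.
            {"from": "web", "to": "db", "port": 5432, "action": "allow"},
            {"from": "*", "to": "db", "action": "deny"},
        ],
    }

    request = urllib.request.Request(
        CONTROLLER,
        data=json.dumps(intent).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        print(response.status)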

There are some really interesting capabilities being unlocked here, enabled by virtualization and cloud design patterns, that make security easier, more effective and efficient when it’s abstracted from the siloed and ‘meat cloud’ [Amazon Mechanical Turk] operational models we’ve used to date.

I already see opportunities to do things we haven’t done before -- but I also see mistakes being made. I think the future depends upon being able to detect those mistakes and recover from them quickly … and not repeat them. Sounds familiar, no?

Ranum: It does indeed sound familiar. I used to yell at people and say, ‘You don’t understand security if you don’t do configuration management!’ The trend seems to be configuration automation now. In order to build these massively scaled systems and networks, we’re working beyond the ability of humans to actually do the detail-oriented thinking. Or am I wrong about that? Do you see a bright future of self-assembling and self-diagnosing systems and networks? Or is it going to be more of the same: a buggy mess?

Hoff: If we just look at this purely from a connectivity perspective, there is simply no way the meat cloud can scale. The sheer volume, velocity and variety of things that are interconnecting to one another means that networks must be automated, orchestrated, managed and troubleshot with as little human intervention as possible. However, as we also know, connectivity (or more specifically availability) and security are often at odds with one another, so this means that the network, and the security apparatus connected to it, needs to be part of this automata complex.

We’re already seeing the ability to gather, parse, enrich and then correlate all sorts of data from the network, security controls, platforms and applications … and do stuff with it. I think this means we can actually get rid of a lot of problems from both an availability and a security and privacy perspective by eliminating human error.
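
As a concrete, if toy, version of that gather-parse-enrich-correlate loop, the Python below reads firewall deny events, enriches them with asset context and flags repeated denies against one host. The log format, asset database and alert threshold are invented for illustration.

    from collections import defaultdict

    # Toy gather/parse/enrich/correlate pipeline over firewall events.
    raw_events = [
        "2015-04-01T10:00:01 DENY src=10.0.0.5 dst=10.0.1.9 port=22",
        "2015-04-01T10:00:02 DENY src=10.0.0.5 dst=10.0.1.9 port=23",
        "2015-04-01T10:00:03 DENY src=10.0.0.5 dst=10.0.1.9 port=445",
    ]

    asset_db = {"10.0.1.9": {"owner": "payments", "criticality": "high"}}

    def parse(line):
        ts, action, *fields = line.split()
        event = {"ts": ts, "action": action}
        event.update(field.split("=") for field in fields)
        return event

    def enrich(event):
        event["asset"] = asset_db.get(event["dst"], {"criticality": "unknown"})
        return event

    # Correlate: count denies per (src, dst) pair and flag noisy pairs.
    counts = defaultdict(int)
    for event in (enrich(parse(line)) for line in raw_events):
        if event["action"] == "DENY":
            counts[(event["src"], event["dst"])] += 1

    for (src, dst), hits in counts.items():
        if hits >= 3:  # illustrative threshold
            crit = asset_db.get(dst, {}).get("criticality", "unknown")
            print(f"possible scan: {src} -> {dst} ({hits} denies, criticality={crit})")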

This, in turn, will free up humans to do what they do best and focus on the things that matter most. This will take time, but we’re already seeing the art of the possible demonstrated by services like Amazon, Facebook, Google and Netflix that have very few dedicated network and security operators but represent some of the largest scale [operations] we’ve seen in networking and security. This comes down to simplification and automation at its best.

Ranum: One of the things I keep hoping we will finally see as automation becomes more pervasive and necessary is an end to the ‘penetrate and patch’ model. I have more or less given up on the idea that software will ever be attack-resistant out of the gate, but ... well, you tell me: Do you think the future of fielding applications is going to look like app stores or software as a service or something else?

Hoff: Resilience is hard. I think there’s an opportunity to really improve quality, and software -- secure software -- most definitely plays a huge part in that. If we think about security not as a ‘thing we do’ but as a functional component that should be measured, appropriately applied and managed to achieve better availability, privacy, experience and service levels -- and operational practices like DevOps incentivize and enable measured improvements -- it will drive a set of behaviors and technology that will change how we think about what we do.

Penetrate and patch, followed by patch and pray, really have no place in modern-day software engineering, which means they have no place in the way in which we assure security or privacy. I think we’re getting to the point where security is evolving from a vocation into a profession, and this is really driving behaviors and methodologies that reflect an engineering mindset. It doesn’t mean that we’ve gotten there yet, or that we’re able to simply disregard years of technical, operational, political or cultural debt, but I do believe we’ve seen the art of the possible … and it’s achievable.

We’re still dealing with aging protocols and practices. And Internet monoculture vulnerabilities such as Heartbleed, Shellshock and POODLE indicate that we do a pretty poor job of dealing with the half-life of vulnerabilities, even when exploits are in the wild … so we’re going to have to look for motivation beyond breach fatigue, and we’re definitely going to have to lean on more automation to discover and remediate these vulnerabilities.
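
One flavor of that automation is easy to sketch: sweep your own hosts and flag any that will still negotiate a pre-TLS 1.2 protocol, the class of weakness POODLE exploited. The standard-library Python below is a minimal illustration; the hostnames are placeholders, such a scan requires authorization, and it assumes the local OpenSSL build still enables the legacy protocols being offered.

    import socket
    import ssl

    # Sketch of automated discovery: flag hosts that accept a handshake
    # capped below TLS 1.2. Scan only infrastructure you are authorized
    # to test; hostnames here are placeholders.
    HOSTS = ["internal-app.example.com", "legacy-api.example.com"]

    def accepts_legacy_tls(host, port=443, timeout=5):
        """Return (accepted, negotiated_version) when offering only old protocols."""
        ctx = ssl.create_default_context()
        ctx.minimum_version = ssl.TLSVersion.TLSv1    # requires legacy protocol
        ctx.maximum_version = ssl.TLSVersion.TLSv1_1  # support in local OpenSSL
        try:
            with socket.create_connection((host, port), timeout=timeout) as sock:
                with ctx.wrap_socket(sock, server_hostname=host) as tls:
                    return True, tls.version()
        except (ssl.SSLError, OSError):
            return False, None

    for host in HOSTS:
        accepted, version = accepts_legacy_tls(host)
        if accepted:
            print(f"{host}: accepts {version} -- flag for remediation")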

Ultimately, I also think that we need to consider a refined approach to securing our assets that involves more techniques across the active-response continuum. I’ll leave this little hand grenade to punctuate our conversation, because it’s a doozy. Maybe you’ll ask me back to explain that later.

About the author:
Marcus J. Ranum, chief security officer of Tenable Network Security Inc., is a world-renowned expert on security system design and implementation. He is the inventor of the first commercial bastion host firewall.

This was last published in April 2015
