Face-off

Is There Strategic Software?

Marcus Ranum

Point

In February, Dubai Ports World tried to buy the operations of several major U.S. ports from their British owner, but the deal was scotched over concerns that Arabs might not keep them adequately secure. Never mind that the U.S. couldn't keep them adequately secure.

Fast-forward a few months, and Israel-based Check Point Software Technologies tries to buy U.S.-based intrusion detection systems provider Sourcefire, but the deal is quashed over aftershocks from the Dubai Ports fiasco. Never mind that U.S. government agencies can't keep their networks sufficiently secure to begin with; questions were raised as to whether Check Point should control a piece of software that is widely used in U.S. government networks.

Is there such a thing as "strategic software"? Of course there is. But a better question to ask would be: "Hasn't the horse already left the barn on that issue?"

The truth is, if your software controls your computer (and it does), then the person who writes the software also controls your computer. Does that have strategic implications? Ask the European Union, which in the past has voiced dismay over the fact that virtually all of its computers are controlled by software from a certain company in Redmond, Wash. A cynic could see the Check Point-Sourcefire deal as the U.S. government getting a taste of its own medicine.

I'm a cynic, but not amused. If we accept the idea that software can have strategic implications, it would make a lot more sense for the U.S. government to be thinking in terms of a "strategic software reserve"--kind of like our "strategic helium reserve"--rather than killing a single high-tech acquisition. It's patently ridiculous to worry about Check Point owning Snort when Check Point already owns ZoneAlarm, a widely used personal firewall that completely controls a Windows computer's TCP/IP stack and processes.

And why worry about the Israelis owning an IDS company when Canada's Research In Motion controls the communications of all the BlackBerry handhelds that government bureaucrats simply can't live without? Remember the panic a few months ago when a patent dispute threatened to shut down RIM's service? Seems the U.S. government was afraid it would be unable to send "sensitive but unclassified" messages to Canada and back. If that communication is strategic, it looks to me like there are a lot of barn doors in need of locking.

The governments of the world have adopted high tech without thinking of it as a weapon. Department of Defense pundits talk incessantly about "information-centric warfare," yet they completely avoid thinking about software as a weapons system in its own right--even though every major weapons system today relies on software. I wonder if it's simply too difficult a problem, and everyone has preferred to shut his brain off and say, "Horse? What horse?"

Given the level of denial about the issue, it was silly to single out Check Point and Sourcefire. My guess is that if it were possible to even understand the situation, most of us would be terrified--if we allowed ourselves to be that paranoid. Perhaps this is just one of those problems that we'll leave for future generations to unravel. And, oh, did I forget to mention that key components of Microsoft's ISA firewall were written by one of the company's development teams in Israel?

Bruce Schneier

CounterPoint

If you define "critical infrastructure" as "things essential for the functioning of a society and economy," then software is critical infrastructure. For many companies and individuals, if their computers stop working, they stop working.

It's a situation that snuck up on us. Everyone knew that the software that flies 747s or targets cruise missiles was critical, but who thought of the airlines' weight and balance computers, or the operating system running the databases and spreadsheets that determine which cruise missiles get shipped where?

And over the years, common, off-the-shelf, personal- and business-grade software has been used for more and more critical applications. Today we find ourselves in a situation where a well-positioned flaw in Windows, Cisco routers or Apache could seriously affect the economy.

It's perfectly rational to assume that some programmers--a tiny minority, I'm sure--are deliberately adding vulnerabilities and backdoors to the code they write. I'm actually kind of amazed that backdoors secretly added by the CIA/NSA, MI5, the Chinese, Mossad and others don't conflict with each other. Even if these groups aren't infiltrating software companies to plant backdoors, you can be sure they're scouring products for vulnerabilities they can exploit, if necessary.

On the other hand, we're already living in a world where dozens of new flaws are discovered in common software products weekly, and the economy is humming along. But we're not talking about this month's worm from Asia or new phishing software from the Russian mafia--we're talking national intelligence organizations. "Infowar" is an overhyped term, but the next war will have a cyberspace component, and these organizations wouldn't be doing their jobs if they weren't preparing for it.

Marcus is 100 percent correct when he says it's simply too late to do anything about it. The software industry is international, and no country can start demanding domestic-only software and expect to get anywhere. Nor would that actually solve the problem, which is more about the allegiance of millions of individual programmers than which country they happen to inhabit.

So, what to do? The key here is to remember the real problem: current commercial software development practices aren't good enough to reliably detect and remove deliberately inserted malicious code. Once you understand this, you'll drop the red herring arguments that led to Check Point not being able to buy Sourcefire, and concentrate on the real solution: defense in depth.

Security software is, in a sense, an after-the-fact kludge, needed only because the underlying OS and applications are riddled with vulnerabilities. If your software were written properly, you wouldn't need a firewall--right?

If we were to get serious about critical infrastructure, we'd recognize it's all critical and start building security software to protect it. We'd build our security based on the principles of safe failure; we'd assume security would fail and make sure it's OK when it does. We'd use defense in depth and compartmentalization to minimize the effects of failure. Basically, we'd do everything we're supposed to do now to secure our networks.
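
To make "safe failure" and compartmentalization a little more concrete, here is a minimal, hypothetical sketch in Python--not anything either columnist describes. The function names and the record format are invented for illustration: the risky parsing of untrusted input is pushed into a separate worker process (compartmentalization), and any error there produces an empty, deny-by-default answer rather than a partially trusted one (safe failure).

# Hypothetical illustration only; names and formats are invented, not from the column.
import multiprocessing


def parse_untrusted(record: str) -> dict:
    """Risky step, run in its own process so a crash or compromise here
    stays walled off from the caller."""
    key, _, value = record.partition("=")
    if not key or not value:
        raise ValueError("malformed record")
    return {key.strip(): value.strip()}


def lookup(record: str) -> dict:
    """Fail closed: any error in the isolated worker yields an empty result,
    never a partially trusted one."""
    with multiprocessing.Pool(processes=1) as pool:
        try:
            return pool.apply(parse_untrusted, (record,))
        except Exception:
            return {}  # safe failure: deny by default


if __name__ == "__main__":
    print(lookup("host=10.0.0.1"))  # {'host': '10.0.0.1'}
    print(lookup("garbage input"))  # {} -- fails closed instead of guessing

The point of the sketch is the failure mode, not the parsing: even if the isolated step is subverted or simply breaks, the caller gets nothing rather than something it is tempted to trust.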

It'd be expensive, probably prohibitively so. Maybe it would be easier to continue to ignore the problem, or at least manage geopolitics so that no national military wants to take us down.


Please send your comments on this column to feedback@infosecuritymag.com

Coming in November: Do federal security regulations help?

This was first published in September 2006
