Marcus Ranum: Q&A with clean-slate pioneer Peter G. Neumann

Marcus Ranum, security expert and Information Security magazine columnist, goes one-on-one with clean-slate luminary Peter G. Neumann of SRI International, formerly of Bell Labs.


Marcus Ranum caught up with security pioneer Peter G. Neumann, Ph.D., who, at age 80, is still a thought leader in the industry and the moderator of the Association for Computing Machinery (ACM) Risks Forum. Neumann, the principal scientist in SRI International’s Computer Science Lab, where he has worked for 41 years, is heading up research on secure computer system platforms with security researcher Robert N. Watson of the University of Cambridge Computer Laboratory, under the Pentagon’s Defense Advanced Research Projects Agency (DARPA) clean-slate design programs.

In the 1960s at Bell Labs, Neumann worked on the Multiplexed Information and Computing Service (Multics), a mainframe operating system developed in conjunction with the Massachusetts Institute of Technology and General Electric, which served as a “timesharing utility” for numerous sites and academic institutions. Multics was the first operating system to use rings of protection, or privilege levels, to control access.

Marcus Ranum: Peter, I've been a fan of your work for most of my professional career, and it's a great pleasure to interview someone who, I've always felt, has been far ahead of me all that time.

Peter G. Neumann: Well, I'm far ahead of you in time, at least, in that my first computer job was in the summer of 1953.

Ranum: Back in 1999, I suggested that we scrap all the software that runs the Internet and code something better from scratch, then blame it on Y2K. The idea of junk-it-all seems blindingly obvious to a few of us, but why doesn't it take hold with everyone else? You'd think after a decade-plus of malware—following a decade of viruses—people would realize that reliable systems are more cost-effective than unreliable systems, plus all the [junk] you need to make them semi-reliable. What's going on?

Neumann: “Obvious” and “everyone” are interesting words here. This has, indeed, been obvious to you, me and most of our favorite colleagues for a very long time…. Some major dichotomies are at play here: first, far-sighted research versus the desire for quick-and-dirty, palliative solutions with short-term profits; second, the recognition that everyone is a stakeholder when it comes to having systems that are safe, reliable, secure and predictably trustworthy versus optimizing for stockholder dividends; and third, the understanding that trustworthiness is inherently complex versus an almost ubiquitous quest for simple solutions that are foolproof and easy to use.

The simplistic solutions are fundamentally inadequate, although Voltaire's dictum, “The best is the enemy of the good,” is often used to justify things that are nowhere near good enough for the uses to which they are put. Besides, no system can ever be truly perfect, especially when we consider such threats as denial-of-service attacks, insider misuse and all sorts of out-of-band stuff. Thus, talking about the “best” is more or less irrelevant, and “good” is still likely to be inadequate.

Ranum: A lot of people sure do love their Android and iOS devices, and that's a whole new operating environment. So we've seen that the consumer base can be shifted overnight, by the millions, to a new operating system or usage metaphor. Do you see that as promising? Every time I suggest to people that they scrap Windows, the answer is, “It's too embedded in our environment.” But we've seen the whole Unix-based computer world more or less collapse into Linux and Sun, and nobody has died from it—except for folks who held onto their SGI stock for too long. It can happen; will it happen again?

Neumann: I think it is already beginning to happen, although it may occur only in step functions as innovators incrementally recognize the huge, long-term payoffs. There are many application areas in which what appears to be just barely good enough is doomed in the long run to be much less than meets the eye: easily compromised mobile devices, laptops, desktops and servers; our critical national infrastructures, in part simply lashed together from available system components; the cloud-computing Kool-Aid bandwagon, in which you trust third and fourth parties, many of which are hidden from view, with unknown side effects and questionable accountability; human inability to avoid compromising situations such as Web-based malware; overreliance on system administrators to compensate for flawed systems; the almost complete lack of beginning-to-end trustworthiness in elections; and so on, seemingly ad infinitum. The new birth in small operating systems is refreshing but still not good enough. Overall, a gigantic wake-up call is needed for everyone: developers, customers, government agencies and mass-market vendors. We continually hear the myth that some sort of disaster might finally stimulate increased awareness; unfortunately, disasters usually result in more palliative fixes rather than fundamental changes.

Some of the clean-slate system and network architecture projects currently [underway] are definitely steps in the right direction. The special issue of IEEE Security & Privacy magazine (November-December 2012) devoted to “Lost Treasures,” which looks at past efforts worth reconsidering, is particularly relevant here. That is a step forward, even if it might seem like a step backward to some folks. For example, we are contemplating building some sort of mobile device platform on our trustworthy system, among other prototype applications, enabling better security than is currently possible.

Ranum: Mobile devices have no need to support add-on peripherals. A lot of security cracks stem from operating systems that allow the end user to plug in a USB device. Not having to support a giant slew of abstract devices must go a long way toward keeping user code out of kernel space. Part of your current research is on operating system design. How do you address that sort of problem in a secure operating system?

Neumann: It is not just operating systems that are critical here. Without better hardware, the operating systems are still vulnerable. Without secure operating systems, the applications are still vulnerable. Without compilers that can hide much of the complexity from developers who are not coding wizards, application programmers are still likely to create bad programs. So, everything needs to be considered as a total-system problem. This is especially true of applications such as election systems and life-critical systems. Robert Watson's Capsicum [developed at the University of Cambridge Computer Laboratory and funded by Google] is a hybrid capability-based operating system framework that allows legacy code to run alongside secure applications without interference. But it would be much stronger on hardware that is better suited to security. As a consequence, our current work [a joint project between SRI and the University of Cambridge] addresses hardware as well as software.
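To make the Capsicum model concrete, here is a minimal sketch of how a FreeBSD program might use it. The cap_rights_limit() and cap_enter() calls are the actual Capsicum primitives; the file paths and error handling are merely illustrative. Once the process enters capability mode, global namespaces disappear, and it can operate only on descriptors it already holds, with only the rights it retained:

```c
#include <sys/capsicum.h>
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    /* Acquire resources before sandboxing: open a file normally. */
    int fd = open("/etc/passwd", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    /* Restrict this descriptor to read-only operations. */
    cap_rights_t rights;
    cap_rights_init(&rights, CAP_READ);
    if (cap_rights_limit(fd, &rights) < 0) { perror("cap_rights_limit"); return 1; }

    /* Enter capability mode; there is no way back for this process. */
    if (cap_enter() < 0) { perror("cap_enter"); return 1; }

    char buf[64];
    ssize_t n = read(fd, buf, sizeof(buf)); /* still permitted: CAP_READ */
    printf("read %zd bytes inside the sandbox\n", n);

    /* Reaching into the global filesystem namespace now fails. */
    if (open("/etc/group", O_RDONLY) < 0)
        printf("open() after cap_enter(): %s\n", strerror(errno));
    return 0;
}
```

The hybrid design Neumann refers to is visible here: legacy code needs no changes at all, and only the components that opt into capability mode give up access to the global namespaces.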

Capsicum and our current work show that clean-slate architectures need not throw away everything and start from scratch, but rather that there are some evolutionary paths, if we can constructively build bottom-up from better hardware architectures.

With that in mind, our current clean-slate projects are developing new tagged/typed capability-based hardware with fine-grained access controls, capability-aware hypervisor/separation kernels, and capability-aware operating system and programming language extensions, and applying them to the development of trustworthy servers, network switches and switch controllers.
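As a rough illustration of what tagged capability hardware enforces, consider the following C sketch. It is a hypothetical software model for exposition only—the capability struct and the cap_load() helper are invented here, not any real API. A capability is a reference carrying bounds, permissions and a validity tag, and every access is checked against all three, which is what makes the access control fine-grained:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical software model of a tagged, bounded capability. Real
 * tagged hardware stores these fields, plus the validity tag, in
 * widened registers and memory, and checks them on every access. */
typedef struct {
    uintptr_t base;   /* lowest address the holder may touch          */
    size_t    length; /* size of the accessible region                */
    unsigned  perms;  /* permission bits (read/write/...)             */
    bool      tag;    /* cleared if ordinary data overwrites the cap  */
} capability;

enum { PERM_READ = 1 << 0, PERM_WRITE = 1 << 1 };

/* Every load is checked against tag, permissions and bounds;
 * hardware would trap instead of returning false. */
static bool cap_load(const capability *c, uintptr_t addr, size_t len, uint8_t *out)
{
    if (!c->tag || !(c->perms & PERM_READ))
        return false;                               /* invalid or unreadable */
    if (addr < c->base || addr + len > c->base + c->length)
        return false;                               /* out of bounds */
    memcpy(out, (const void *)addr, len);
    return true;
}

int main(void)
{
    uint8_t secret[16] = "not for you!";
    uint8_t window[8];
    memcpy(window, secret, sizeof(window));

    /* A read-only capability covering just the 8-byte window. */
    capability c = { (uintptr_t)window, sizeof(window), PERM_READ, true };

    uint8_t b;
    printf("in-bounds load ok:     %d\n", cap_load(&c, (uintptr_t)window, 1, &b));
    /* One byte past the region: refused, never silently corrupting. */
    printf("out-of-bounds load ok: %d\n", cap_load(&c, (uintptr_t)window + 8, 1, &b));
    return 0;
}
```

The point of pushing this into hardware is that the check costs no extra instructions and cannot be skipped by buggy or malicious software.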

What is really unusual here is the overall far-sighted approach: The field-programmable gate array (FPGA)-based hardware is formally specified in a high-level, abstract, modular typed language [Bluespec] that compiles into Verilog for loading onto FPGAs. The test suites are, in part, formally generated. And we have built SRI's formal methods into the tool chain, so that we can actually have formally analyzed hardware specifications, as well as low-level software with proven properties. After talking about doing things like that since the 1960s and 1970s, we are finally able to do it. Robert Watson and I, together with a wonderful team, have written two early papers that might be of interest to people who are wondering about some of the details…with all research and development intended to be open. I know it sounds heretical and countercultural, but thus far it appears to be working.

I am reminded of Fred Brooks [a grad school colleague and co-author of a 1957 paper on the composition of music on the Harvard Mark IV], who came up to me at the Spring Joint Computer Conference in Atlantic City in 1967. He stated quite emphatically that, as an old information theoretician, I should realize that Multics was impossible. He was alluding to the fact that Multics could interrupt on each typed character, apparently not realizing that the independent I/O controller co-processor had no need to do that very often in real time. I suspect many skeptics will read what I have said and come to other conclusions—that existing hardware is perfectly adequate; or that today's operating systems are good enough (poor Voltaire!); or that formal methods are still too labor-intensive; or that we must be out of our minds. But, perhaps surprisingly, it all seems to be coming together. After the first two years, we already have the capability hardware running on FPGAs with FreeBSD and Internet accessibility, and the beginnings of formal analyses of the hardware specification.

Of course, there are no easy answers, and we are not claiming that we are developing a better mousetrap—and certainly not the best one. However, the opportunity to take DARPA Program Manager Howie Shrobe's “Suppose we got a do-over, what would we do differently?” seriously is a real blessing…. Indeed, his Clean-slate Resilient Adaptive Secure Hosts (CRASH) and Mission-oriented Resilient Clouds (MRC) programs seem likely to produce results that advance the state of the art significantly, at least as existence proofs and demonstrations of what is possible, with considerable potential for some of Howie's projects to move into the mainstream.

Ranum: I completely agree with you that it's a problem that has to be tackled across the board. I think computer security has suffered from a marketing myopia, as each vendor offers a single complete solution to security, and we later discover that, of course, it isn't. I'm actually pretty comfortable with the idea that humans could build a really reliable operating system if they wanted to—but what about programmers? How do you stop the guys who write garbage? I don't mean code that's got detail flaws, but rather code with flaws like forgetting to authenticate.

Neumann: We are all fallible. We need architectures, software methodologies, programming practices and systems that are more tolerant of human foibles and better able to detect vulnerabilities and flaws. How about starting with better system requirements before implementation? Better use of software engineering disciplines? Better programming languages that make it much harder to write bad software? Better evaluation of failures? How about principled, system-oriented, ongoing education, rather than just learning how to work around riskful features in programming languages? On the other hand, principles are by themselves inadequate without experience in their application, because they cannot be applied universally without considerable thoughtfulness and care.

Similarly, what are called “best practices” are typically another example of good but not good enough; they are inherently incomplete. Certification of programmers is a slippery slope that can degenerate into lowest-common-denominator, just-able-to-meet-the-low-bar standards. Liability for seriously flawed software is another slippery slope, because blame is, in reality, usually attributable to widely distributed sources throughout the entire system life cycle. Once again, there are no easy answers; it is usually the total-system nature of our problems that must be addressed, rather than ad hoc compositions of point solutions. However, that is difficult, because to a person with a particular tool, that tool begins to look like a universal solution—even though it addresses only a very small part of the problem.

Ranum: One last question—and that's regulation. What do you think the government's or legal system's roles need to be in all this?

Neumann: We need some thoughtful government regulation, incentivization and impartial oversight. Free enterprise seems to have failed miserably when it comes to the security of critical systems, although it has been wonderfully successful with respect to new features. The lack of integrity in our election systems is a horrible example of that failure, where the blame is, of course, widely distributed. Although we seem to have done much better on trustworthiness involving reliability, much of that work has ignored security, which may eventually come back to bite us.

There are other seemingly desirable solutions, including some that involve Congress and law enforcement, both of which, unfortunately, seem to be in need of serious improvements in their technological expertise, especially when it comes to appreciating that some of the problems are international in origin (which makes U.S. laws difficult to enforce) and that corner cases abound, all of which must be anticipated. Beware of what you ask for, because the overkill may backfire on us all.

About the authors:
Marcus J. Ranum, chief security officer of Tenable Security Inc., is a world-renowned expert on security system design and implementation.

Peter G. Neumann, Ph.D., is the principal scientist in SRI International’s Computer Science Lab, where he has worked for 41 years. Prior to SRI, he spent 10 years at Bell Labs in Murray Hill, N.J. (which included working on the Multics operating system from 1965 to 1969) and a year teaching at the University of California, Berkeley. Neumann is the moderator of the Association for Computing Machinery (ACM) Risks Forum, an online digest on risks to the public in computers and related systems.

Send comments on this column to feedback@infosecuritymag.com.

This was first published in May 2013
