

Building a secure operating system with Roger R. Schell

The 'father' of the Orange Book has first-hand knowledge of the standards required for classified computer systems and the issues with subversion.


Roger R. Schell is an authority on high-assurance computing and spent more than 20 years in the U.S. Air Force before working in private industry. As one of the lead authors of the U.S. Department of Defense Trusted Computer System Evaluation Criteria (TCSEC), known as the Orange Book, Schell has first-hand knowledge of the standards required for classified computer systems. Published in 1983 by the National Computer Security Center, where he served as deputy director, the TCSEC was replaced in 2005 by an international standard, the Common Criteria for Information Technology Security Evaluation.

The co-founder and vice president of Gemini Computers Inc., Schell led the development of the Gemini Multiprocessing Secure Operating System, known as GEMSOS. In 2001, he founded Aesec Corp., which acquired Gemini Computers and its security kernel in 2003. He also served as the corporate security architect at Novell.

Marcus Ranum spoke with Schell, now a professor of engineering at the University of Southern California Viterbi School of Engineering, about the security practices of the U.S. government, the National Security Agency's A1 class systems -- Gemini was one -- and the development of a secure operating system. Is it even feasible at this point?

Editor's Note: This Q&A has been edited for clarity and length. 

I cursed the Trusted Computer System Evaluation Criteria as a young pup, and it took me 20 years to work my way around to realizing it was way ahead of its time. Are you frustrated to see how computing has been moving downhill in terms of trustworthiness?

Roger R. Schell: It certainly has been disappointing. I suppose that with 22 years of military experience, I learned that 'you fight with what you have,' and I tend not to get frustrated personally. I try to hope for better outcomes and to influence a better direction.

It appears that the U.S. is putting backdoors in all kinds of systems, while ignoring the potential for people doing it to us. How do you see that trend?

Schell: I don't know if you saw or remember the article I wrote about 40 years ago, on the electronic Air Force and how computers were its Achilles' heel? I asserted that the primary problem we had to address was subversion. Colin Carter at IBM Research characterized that paper as the seminal paper in information warfare. I described, in sort of broad brushstrokes, what people today would recognize as Stuxnet. I and others saw that was what people had to address when things really mattered.

I remember that paper; it made an impression on me, along with Ken Thompson's "Reflections on Trusting Trust" [Communications of the ACM, August 1984], which you no doubt recall. It seems that there was a brief period of time where people were concerned with trusted software distribution and controlling releases. But now, what have we got? Automatic download is built into everything.

Schell: I think there are different factors: There are the issues of government policy and then there are the issues of businesses. I did an interview on long research a couple of years ago and noted that there are huge vested interests that don't want to have those issues successfully addressed. I won't say that that's the cause, but -- taken together -- I think it's very hard to get a focused effort to address the problem.

That was very delicately put. I've phrased that as 'The computer security industry doesn't exist to build secure systems; it exists to get customers' money.'

Is there any particular moment that made you realize that trust and integrity were important to systems?

Schell: I got introduced to the issues of subversion early in my life. My father was a migrant farmworker with a fifth-grade education, the son of immigrant parents. We came without a lot of resources, so we made do with what we had and creative ways of accomplishing our ends.

One of my first experiences with subversion was when I was in grade school. We had a teacher we did not particularly appreciate, so we stuffed a raw potato up his exhaust pipe. It had the usual effect: He'd drive for a mile or so, his car would stop and he'd try to figure out why, and after a bit it would go again. That was my first introduction to subversion -- he'd look for all the usual things, and yet it didn't work properly. It was a question of the need for integrity throughout the system.

One of the hardest things for me to understand, at one point, was that computers tend to fail at single points, so we can diagnose a failure in terms of what has changed. Subversion -- induced failure -- breaks that model badly.

Schell: In the town where I was in high school, when Halloween night came, for two years running, I turned off all the streetlights in the town. That was a case of where what happened wasn't obvious, and the second year I was surprised that they let it happen again.

Not that I want you to confess to anything, but … this would have been before smart grid, so wouldn't you have had to gain access to some physical switch somewhere?

Schell: That's what you would think -- and so did they. I could watch the power company building, and they ran around checking all the usual things to see what the problem was. But it turned out that towns at that time had gotten tired of having lamplighters go out and turn on the lights, so there was an electric eye that would turn the lights on when it got dark. I took a can, strapped a few batteries around the side of it, put a little light bulb inside, and set the can over the electric eye.

Years ago -- I guess it was right before 9/11 -- I was part of an expert review group [tasked with] making recommendations to the government regarding security. And I floated the idea that maybe the government needed its own operating system. I know that idea is familiar to you.

Is the code mass necessary to produce a secure operating system now so large that it's impossible to develop one without it being subverted from its inception?

Schell: When we began looking at trusted operating systems, that might have been the case, but now it's not. A fact of science that people are not willing to accept is that it's not possible to build a secure system without a secure operating system. If people would accept that one fact, it would dramatically change how they approach the problem. But people -- including the government -- have spent millions of dollars trying to demonstrate otherwise. Of course, they are never going to succeed.

My counterexample is always subversion: If people have subverted the operating system, the whole system is vulnerable. When I first encountered the problem of subversion in a military context, it was a system -- I was the system engineer for Southeast Asia -- in which they wanted to connect intelligence information to the operational information. Being in the Air Force, I was concerned with making that information available to pilots who were getting shot down when they could have been avoiding hot spots.

They had never previously allowed a computer to make that electronic connection between intelligence and operations, but they understood that, in order to do that, they had to control both the computer and the operating system. To do so, we controlled components built all the way from the hardware up to the software -- the entire stack. We delivered that controlled interface system, called Seek Dawn, which was highly effective. But to solve the problem we had to recognize that the issue was subversion.

The mass of stuff that's deemed necessary nowadays makes that impossible. Where's my graphical interface?!


Schell: At that time, that's what we believed. I think that the technology, which you would know of, reflected in the Orange Book as 'the security kernel,' would later dramatically change that, [including] the mandatory access control [MAC] policy, which the military calls Multilevel Security [MLS]. I can enforce that with an operating system kernel that is really quite small, and we've done that successfully. We have built and fielded half a dozen kernel-based systems that have run for decades in the face of nation-state adversaries and have never had a reported security patch -- ever. We know how to do that.
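Editor's Note: To make the MAC/MLS policy concrete, the sketch below shows, in Python, the classic Bell-LaPadula rules such a security kernel enforces -- "no read up, no write down." It is an illustration only, not GEMSOS code; the level names and functions are our own.

```python
# Minimal sketch of the Bell-LaPadula MLS checks a security kernel
# enforces. Illustrative only -- not GEMSOS code. Real systems also
# carry non-hierarchical categories alongside these ordered levels.

LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

def may_read(subject: str, obj: str) -> bool:
    """Simple security property: a subject may not read up."""
    return LEVELS[subject] >= LEVELS[obj]

def may_write(subject: str, obj: str) -> bool:
    """*-property: a subject may not write down."""
    return LEVELS[subject] <= LEVELS[obj]

assert may_read("SECRET", "CONFIDENTIAL")       # read down: allowed
assert not may_read("CONFIDENTIAL", "SECRET")   # read up: denied
assert not may_write("SECRET", "CONFIDENTIAL")  # write down: denied
```

Because the policy is this small and fixed, the kernel that enforces it can be small too, which is what makes verification tractable.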

Meanwhile, the industry has shifted to a model of 'patch it and patch it and patch it and patch it.' Eventually, we will build a silk purse out of a sow's ear if we use enough silk thread.

Schell: Consequently, you have not built, in my view, what the Orange Book would call a secure operating system. You end up needing to prove this negative assertion that there are no security flaws in the system.

We have done the opposite of proving the negative: You have to build the system in a different way. The reference monitor [security kernel] changes that; the 'aha!' moment was when we created the Formal Security Policy Model and changed that negative assertion into a positive assertion and were able to verify that assertion as part of the design of the system. That is the linchpin.
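Editor's Note: The sketch below illustrates, in Python, the reference monitor idea Schell describes: every access passes through a single, small mediation point, so "every granted access satisfies the policy" becomes a positive property of the design. The class and names here are hypothetical, for illustration only.

```python
# Sketch of a reference monitor: one mediation point for all accesses.
# Because no code path reaches an object except through access(), the
# positive assertion "all granted accesses satisfy the policy" holds
# by construction and can be verified against the policy model.

class AccessDenied(Exception):
    pass

class ReferenceMonitor:
    def __init__(self, policy):
        # policy: callable(subject, obj, mode) -> bool, e.g., the
        # may_read/may_write rules sketched above
        self.policy = policy

    def access(self, subject, obj, mode):
        if not self.policy(subject, obj, mode):
            raise AccessDenied(f"{subject!r} may not {mode} {obj!r}")
        # ... the actual read or write would be performed here ...
        return True
```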

The class A1 requirements of the Orange Book don't just address the question of how to build and evaluate the system; they address subversion. There are three divisions in the Orange Book: A, B and C. Those were distinguished by the threats they address. Division C addresses amateur hackers -- essentially all of what security is trying to address today. Division B addresses the problem of malicious applications -- Trojan horses -- while assuming that the operating system is correct. Division A is explicitly aimed at the question of 'How do I build an operating system where I am confident that there are no trap-doors?' We succeeded: We built several of those and fielded them.

I was at Trusted Information Systems in the early 1990s, when they were working on Trusted Mach and Trusted Xenix [Unix operating system with MLS]. I looked at porting my firewall code to Xenix, slammed up against covert channels, and that was as far as I got. Nowadays, I can't see anyone thinking about that stuff; it's all overt channels.

Schell: We can never win the 'penetrate and patch' game, because we know that where there's a hole, there will always be an uncountable number of holes that remain.

There is a myth that has persisted that says, 'OK, you might be able to build a system that is trustworthy, but you can't do anything practical with it.' There are those that don't want to solve the problem -- not everyone in the industry does -- that's the challenge. You expressed that, yourself, in a general sense a month or so ago when you described folks living in 'Orange Book la-la land.' I'd be interested in what made you believe the Orange Book crowd is off in impractical la-la land?

Wow [floundering]. I've been saying that on and off since the late 1980s. The commercial industry doesn't allow that kind of system to happen. It's not that the systems shouldn't be used; it's that everyone is conditioned [to think] that they don't want to use them. They will always come up with excuses.

MLS seemed to be decent, but I remember all I ever heard was pushback on the problem of multi-domain browsing. People complained about having two windows open -- the horror! All of security is in la-la land; we keep having this fight about what we want versus what we need.


Schell: I disagree with several of those points. The totally commercial Multics system [dockmaster.navy.mil gateway], which had MLS in it and was used by several places, was very successful. There was no hue and cry about the MLS getting in the way. It was what we would now call software as a service. In the Pentagon, it was used as the primary data processing system for 15 years. Everybody used it, and it provided that MAC enforcement effectively. I don't understand where your empirical evidence is that we can't build a practical system.

You're right about that. Since I've been stuck in the commercial space, I see the Orange Book as sticking an oar in the ground and saying, 'Here is where we'll stand.' I've been on the wrong side of that my entire career.

When I started with firewalls, I had people from the trusted computing systems world coming over and saying, 'Firewalls are stupid. It is a fundamentally bad idea because it isn't trustworthy.'

I kept having to say, 'Yes, but you're going to get a firewall. That's what's going to happen.'

Schell: It's a question of how the system enforces the policy. Maybe there doesn't need to be a policy that says, 'The browser can't access the picture viewer,' but there does need to be a policy that says, 'If I am going to bring my own device to work, I need to separate those security domains.' I don't need to separate my email from my pictures; I need to separate my internet traffic from my corporate traffic. Those are MAC policies we know how to enforce with high assurance.
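Editor's Note: The sketch below illustrates, in Python, the domain separation Schell describes: labels carry non-hierarchical categories, and because "corporate" and "internet" are incomparable, the policy permits no flow in either direction. The names are ours, for illustration.

```python
# Sketch of MAC domain separation for the bring-your-own-device case:
# information may flow from source to destination only if the
# destination's categories dominate (contain) the source's. Two
# incomparable domains therefore cannot exchange data at all.

def may_flow(src: frozenset, dst: frozenset) -> bool:
    return src <= dst  # destination label must dominate the source

CORPORATE = frozenset({"corporate"})
INTERNET = frozenset({"internet"})

assert not may_flow(CORPORATE, INTERNET)  # corporate data cannot leak out
assert not may_flow(INTERNET, CORPORATE)  # untrusted traffic cannot get in
assert may_flow(CORPORATE, CORPORATE)     # flow within a domain is allowed
```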

Today's trend seems to be to do computing at the granularity of the system -- we have all these virtual machines running at 'system high.' It's an endless battle for better than nothing.

Schell: Maybe that's the problem. As long as we keep letting people think they've got something, we're the old patent-medicine salesmen. We think it's going to help, but if we weren't selling patent medicine so effectively, perhaps instead we'd be doing solutions that matter.

Look at what we can do: We now have concrete demonstrations that I can build a class A1 Linux system. You talk about it being out of date; I completely disagree -- I was a development manager at Novell for five years and we had a class C2 system. Yes, a class C2 system has to keep up with moving technology because you have to redo the evaluation for every new platform.

Having a class A1 system, it turns out, is much, much easier. A class A1 system was moved from a '286 to a '486 in a matter of weeks. The reason is that, if you look at the strict layering required for a class A1 system, it's processor-independent. You can tell deductively, 'These are the only modules I have to change.' You have already built all the arguments that tell you things are unchanged, so long as you don't touch the critical parts of the system.

The pieces you need to touch for an upgrade are a much, much smaller part of the system, and it means that, in actual practice, class A1 is easier to keep up to date than class C2.
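Editor's Note: The sketch below illustrates the strict layering Schell credits for such quick ports: only the processor-dependent layer is rewritten for a new CPU, and the layers above depend only on its interface, so their verification arguments survive the move. Class and method names are hypothetical, not from GEMSOS.

```python
# Sketch of strict kernel layering: all processor dependence is
# confined to one layer. Porting means reimplementing that layer;
# everything above it -- and its assurance arguments -- is untouched.

from abc import ABC, abstractmethod

class ProcessorLayer(ABC):
    """The only layer that must be rewritten for a new CPU."""
    @abstractmethod
    def switch_address_space(self, asid: int) -> None: ...

class I486Layer(ProcessorLayer):
    def switch_address_space(self, asid: int) -> None:
        pass  # '486-specific memory-management code would go here

class SecurityKernel:
    """Upper layers use only the ProcessorLayer interface, so a port
    deductively cannot change them."""
    def __init__(self, hw: ProcessorLayer):
        self.hw = hw

kernel = SecurityKernel(I486Layer())  # the port touched only I486Layer
```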

You're right.

Schell: We know how to do this stuff. Take your example of the power grid: Various people have pontificated about that and highlighted it as a particular area of attention. We know how to dramatically improve the security of those kinds of industrial control systems. If we look at where most of those vulnerabilities are most effectively mitigated, they're in programmable logic controllers, and PLCs are easy to secure: They don't have to run user-provided code, and they don't need a user interface. I can build a high-assurance PLC as a straightforward application of off-the-shelf products and software, and sort out what's critical to the critical infrastructure and what's not, using high-assurance MAC in the underlying operating system. We know how to do that. I've done that. Yet nobody is interested in doing it.
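Editor's Note: The sketch below illustrates why a fixed-function PLC is comparatively easy to secure, as Schell argues: with no user-provided code and no user interface, the device can be restricted to a small, closed command set, with MAC labels deciding who may issue commands at all. Domain and command names are hypothetical.

```python
# Sketch of a fixed-function PLC front end: no uploaded code, no user
# interface -- just a closed set of commands, gated by a MAC label on
# the message source. Names here are illustrative only.

ALLOWED_COMMANDS = {"read_sensor", "set_setpoint", "open_valve", "close_valve"}

def handle_command(source_domain: str, command: str) -> bool:
    # MAC in the underlying OS labels each message's source; only the
    # control domain may command the PLC at all.
    if source_domain != "control":
        return False
    # Fixed-function dispatch: anything outside the allowlist -- in
    # particular, uploaded code -- is rejected.
    return command in ALLOWED_COMMANDS

assert handle_command("control", "open_valve")
assert not handle_command("internet", "open_valve")   # wrong domain
assert not handle_command("control", "upload_logic")  # not fixed-function
```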

Why? Is it a case of the adequate competing with the good?

Schell: I'm not a sociologist, and I can't give a definitive answer, but there are a lot of vested interests involved. I talked to some of the manufacturers of PLCs and other embedded systems, and one of the CEOs said, 'The path is littered with the corpses of those who thought the government cared about security.' Then he said, 'I don't intend to join them.'



Join the conversation
"The totally commercial Multics system [dockmaster.navy.mil gateway], which had MLS in it and was used by several places, was very successful. There was no hue and cry about the MLS getting in the way."

I think this claim is a bit optimistic. Switches were built into the system that allowed administrators to decide whether multi-level security was turned on or turned off. A handful of government customers enabled it; the commercial customers and the remaining government customers left those capabilities off, which is why they stayed out of the way.
