Published: 01 Apr 2002
Q: A former colleague of yours at NSA calls you the "den mother" of intrusion detection research. Did you always want to be an engineer?
A: No. I thought I would do something in medicine, but I was diagnosed with epilepsy in my adolescence and was told that no medical school would ever touch me.
My grandfather was principal of an elite girls' school in Tokyo, and my mom went to birthday parties at the Imperial Palace. They lost everything in the war, and she was a war bride. My dad was a self-educated Teamster from a classic Alabama dirt-farmer family. I was one of seven kids raised in Birmingham. Jimmy Hoffa established a scholarship fund, and I was a recipient.
In my senior year of high school, while neurologists discussed how to classify my disability, I won the Betty Crocker award for Alabama, which included a scholarship. In 1973, I was the only woman in engineering at the University of Alabama at Birmingham.
When did computing enter the picture?
At Alabama. I took my first course as a freshman on a monster IBM mainframe with 1 MB of memory. I did the punch card scene, doing Fortran and COBOL. I loved math, but a math career just seemed too risky. That's why I went into engineering.
I was teaching an engineering lab when a couple of Xerox technicians said, "Come work for us. We're under the gun for affirmative action. With your background, you're heaven-sent." I may have been heaven-sent, but I wasn't warmly welcomed. I remember the guy across the desk at Xerox saying, "I guess we have to hire you, since you passed the test." I stayed with Xerox for five years as a specialist repairing copier machines, while taking night courses in math and economics.
Issues of race and gender bias permeate your career. How would you say those factors affected your work and life?
I wouldn't have had that initial job at Xerox if they hadn't hired me under affirmative action, so it's a wash in the long run. But it's certainly a two-edged sword. At times, I think people discriminated on the basis of gender when it wasn't acceptable to do it on the basis of race. Sometimes customers would raise a ruckus for having to deal with me because they believed they had been given "second best" when I showed up, even though I was better educated than most of the men.
For a woman, to be average is an invitation to be maligned. From a personal point of view, you can write it off. But from a functional point of view, it damages the organization. It damages your ability to be a mission player, and it hinders the organization from picking the best person for the right spot because they have to cater to such idiocy.
When did you go to NSA?
I moved to Baltimore with Paul, my husband, and took courses at night. I found a job I loved, running a data processing shop for a civil engineering firm; my husband went to work at the NSA. He said, "Remember those people at school wandering around, whose shoes didn't match and would walk into a wall if you gave them a stick of gum to chew? This place is crawling with them. You've got to come here." So I did. That was 1984.
What helped you realize your potential?
When I got involved in security in 1989, Gene Spafford was the gold standard. At the University of California at Davis, where we were doing a brain trust sort of project, I felt almost embarrassed talking to Spafford because my academic career was so checkered. I spent eight years taking tons more courses than I needed to get an undergraduate degree. It turns out Spafford had every bit as eclectic a background as I did. The ways I was different from the mainstream turned out to be pluses, not minuses.
Spafford said that when you don't connect with a bureaucracy, everyone assumes that the problem is that you have too little to offer. The problem is more often that you have way too much. That made me feel better about my skills and potential.
Career highlights:
Worked for the National Security Agency in various positions.
Led the NSA's Computer Misuse and Anomaly Detection (CMAD) research program and helped build the Information Security Research and Technology Group.
Received the NSA Distinguished Leadership Award for building the CMAD community.
Served as technical monitor for the Intrusion Detection Expert System (IDES) and Next Generation Intrusion Detection Expert System (NIDES) research program at SRI International.
Served as deputy security officer at the Los Alamos National Laboratory's Computing, Information and Communications Division.
Cofounded and became president/CEO of security consultancy Infidel Inc.
Authored Intrusion Detection (Macmillan Technical Publishing, January 2000).
Appointed as a venture capital consultant at Trident Capital.
How did you make the jump to IDS research?
When my son was diagnosed with autism, a friend at the National Computer Security Center (NCSC) said I needed a job that didn't involve so much travel. She had a project that pointed toward initial intrusion detection work. I looked at what they were doing and thought, I may be an idiot, but this is the only thing we're doing that makes sense to me.
I challenged a manager to let me run with the IDS project. We had funding locked in, so I said, "Let's roll with it, and I'll come to you with a strategy for either straightening it out or bringing it down."
I didn't always see eye to eye with the bureaucratic way of doing things, so I actually got on the telephone with stakeholders-that was radical-and looked around to see who was doing work in this space. I started to forge relationships and connections. That's how I got it done.
What was the state of intrusion detection when you began working on it?
Jim Anderson, my mentor, came up with the concept of intrusion detection around 1980, after which he had a Frankenstein experience-"Oh my God, what did I create?" He saw that it was such a sexy notion that it overshadowed preventive measures. The systems were more specialized and sophisticated than a lot of what I see in the commercial world today, and those systems were deployed and then abandoned because people lost interest.
Dorothy Denning and Peter Neumann did a study in the mid-1980s and Dorothy wrote a seminal paper, "An Intrusion-Detection Model," in which she described what is still the de facto model. We haven't even gone 40 percent of the way along the path she described. The government did several prototypes and the Air Force was cranking up for another round-these were the days of Haystack and before the days of the network security monitor.
What do you think of the move toward "intrusion prevention?"
There's a lot of hype and a lot of vision. As Stephanie Forrest, who researches immunology and intrusion detection at the University of New Mexico at Albuquerque, found, any automated response that does detection, decision-making and correction needs to be done relatively low in the network stack, way down at a very fine-grained level. The lower in the stack it occurs, the subtler the correction mechanism can be.
There's an analogy between some things we know about medicine and what we do in computer security. Some of the lessons learned doing things like chemotherapy apply. You can conceptualize or model certain correction mechanisms, but the correction is too crude if you do them at too coarse a level. You end up with all kinds of revenge effects that create more problems for the whole organism.
Where is it all headed?
It's splitting as the network evolves and becomes ubiquitous. Monitoring and detection capabilities reside in and permeate the stack from the coarsest to the finest grain. At the finest grain, it's easier to make generic rules about what should happen. In those situations, you're in a better position to make self-corrections. But anything can suffer when you start dealing with things on more granular levels-for instance, at the packet flow and routing level. At those points, you lose some of the differentiation between functions that occur for security reasons and functions that occur for quality-of-service reasons.
People get all hyped about doing protocol checking and correction of malformed packets. Good network gear does a little of that packet scrubbing as a part of routine network management. Now you're in a situation where you accommodate the fact that that may happen more often than you might expect from a strictly statistical point of view. In so doing, though, you're saying that you don't care so much about who did it or where a bad stream of traffic is coming from. But the bottom line is, you accept that you have to worry about a denial of service, not that some 12-year-old in Peoria has decided to inject frag packets into the network. That ultimately helps a lot. Packet scrubbing isn't going to bury you in false alarms and create noise levels where people can hide things.
From a detection point of view, we'll get more powerful about being able to pull back and take a look at larger patterns and see subtler patterns of behavior. That puts us in a kind of brave new world of intrusion detection. Right now, we're still implementing signature recognition in a kludgey way. There's a lot of improvement yet to be done. Commercial forces will drive most of this improvement, not someone in DARPA.
You once said, "I don't believe that anyone in defense circles, which are at the root of a lot of what we know about security, could ever have foreseen the impact of the World Wide Web. Some folks in defense were blindsided by the whole notion of distributed systems."
That's right. The people in the ranks knew that security was going to be a headache, but I don't think they understood to what degree. They were dealing with trusted network interpretation (TNI) and the Rainbow series. I was supposed to be taking the Orange Book's principles and extrapolating them to networked systems. It turned out to be the nature of networked systems that you have an erosion of security with each additional system you add to the network. At that point, however, people were just beginning to grapple with the idea that they had sensitive, unsecured data.
Was that frustrating for you?
To a degree, but I'm willing to beat my head against the wall only so much. Then I go off and start laying the groundwork for something that will solve the problems.
I cobbled together people who had at least a partial view of what was wrong. That connected me with Tsutomu Shimomura, Matt Bishop and Dan Farmer-people I regard very highly. I systematically worked my connections, and serendipity helped a lot. So I cobbled together a community-a fast-growing community-of good people.
What do you mean by "cobbling together a community of good people" and "serendipity?"
My partner at Infidel, Terri Gilbert, says that serendipity is what happens when you consciously make a piece of yourself available-things do converge. It's amazing how things converge over time.
Terri recognizes the importance of what we're doing. She said, "You know, the whole notion of how to secure this stuff once it's automated really is the problem of this generation." I think she's right. We have to get real about it. Trust is central. Information security is a context in which we can define these critical human and community concepts in a way that matters. Something this important can't take place outside of a community with a mission.
I learned a lot about how to do it just growing up. In the Japanese community, there are about three degrees of separation between people, and if you want to get something done, you'll use that awareness. Coming from a small town, it was natural for me to rely on my community for support. And it also feeds into the law of large numbers: If enough people vote on a particular outcome, you'll reach general convergence pretty quickly and that convergence will be nicely centered on the correct answer.
We understand that there are powerful ways of counteracting these big, hairy problems. If you apply enough people with a few criteria at the beginning, convergence will begin immediately. You may not find the needle right away, but you'll eliminate the three-quarters of the haystack that's not productive.
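The law-of-large-numbers intuition behind this can be sketched with a small simulation. This is purely illustrative - the "true value," the noise model and all parameters are assumptions, not anything from the interview - but it shows how averaging many independent noisy guesses converges on the correct answer:

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

TRUE_VALUE = 100.0  # hypothetical quantity the group is trying to estimate
NOISE = 30.0        # assumed spread of any individual's guess


def crowd_estimate(n_people, noise=NOISE):
    """Average n independent guesses, each the true value plus Gaussian noise."""
    guesses = [TRUE_VALUE + random.gauss(0, noise) for _ in range(n_people)]
    return sum(guesses) / len(guesses)


# As the crowd grows, the average drifts toward TRUE_VALUE,
# even though any single guess may be wildly off.
for n in (1, 10, 100, 10_000):
    print(f"{n:>6} people -> estimate {crowd_estimate(n):.1f}")
```

The error of the average shrinks roughly as 1 over the square root of the crowd size, which is why "enough people with a few criteria" converges quickly even when no individual is especially accurate.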
You've said that you had no delusions about the capabilities of the government side of the fence because commercial superstars can do some things far better. Like what?
I recently keynoted an investment conference for a Wall Street firm and said that they shouldn't invest in "techie toy" firms anymore. We're at a critical juncture in the life cycle of security products-either they have legs and brand loyalty, or they show signs of maturing. So instead of sitting back and saying, "We're smarter than those customers; we know better than they do what they need," we should actually query customers about what they need.
We have had a tremendous amount of hubris in the security field. We've said that because we're the security gurus, we don't have to know anything about how customers interact with their systems. That's so untrue. If you don't understand your value proposition, you're screwed.
We also have to get a better sense of the context in which security operates. We tend to get so enchanted with content that we forget about context. It's all about context. It's critical to integrate products with the users-the human side-with the underpinnings, the network and platforms; and with the business itself, corporate policy and bureaucracy. Gaps in any of those will give you problems in functionality, security and liability.
And now you're working in the VC arena. How are you plugging this new career direction into your work as a researcher?
Building new firms is great fun. It's returning to my old venue, but from a different angle. I love seeing new ideas. Instead of being a failed bureaucrat, I'm a startup person who was stuck in the wrong slot.
Before, I was at a juncture between research and implementation. I was basically in applied research. A lot of smaller firms that used to have their own R&D gave it up in the '70s and '80s because they couldn't justify prototyping something for a couple of decades before moving it to market. There are only a few ivory tower places where you can do pure research, and I had a basic, practical, farm girl mentality. Applied research is what you have to do at startups, and that's what I'm doing for the VC firm. You're giving people legs, allowing them to take research that may not have been high-risk research but was more like systems research, where you take an isomorphic approach and apply it to a whole new problem set.
The other part that's refreshing is that it forces me to do technology transfer, and to grow technical capabilities beyond anything one can do in government. The downside of government is that you can do this stuff and not have to actually produce. Here I don't have that luxury. Because everyone is on that same page, you wind up doing a lot. There's no debate about the level of pressure required to make you produce.
About the author:
Richard Thieme is a contributing editor for Information Security. He writes, speaks and consults on the human dimensions of technology and the workplace.