It’s been 20 years since the first major security-related disruption of the Internet, the Morris worm, hit the worldwide network. The natural reaction to anniversaries like this is to look back and say: Look how much has changed since then. But in this case, the more appropriate response would be: Why haven’t things changed?
In what was the first comprehensive analysis of the Morris worm, written just four weeks after the worm's release, Gene Spafford of Purdue University outlined the framework of what has evolved into the longest-running debate in the security community: the disclosure debate. As he wrote at the time:
On November 8, the National Computer Security Center held a hastily convened workshop in Baltimore. The topic of discussion was the program and what it meant to the Internet community. Who was at that meeting, why they were invited, and the topics discussed have not yet been made public. However, one thing we know was decided by those present at the meeting: they would not distribute copies of their reverse-engineered code to the general public. It was felt that the program exploited too many little-known techniques and that making it generally available would only provide other attackers a framework to build another such program. Although such a stance is well-intended, it can serve only as a delaying tactic. As of December 8, I am aware of at least eleven versions of the decompiled code, and because of the widespread distribution of the binary, I am sure there are at least ten times that many versions already completed or in progress — the required skills and tools are too readily available within the community to believe that only a few groups have the capability to reconstruct the source code.
Many system administrators, programmers, and managers are interested in how the program managed to establish itself on their systems and spread so quickly. These individuals have a valid interest in seeing the code, especially if they are software vendors. Their interest is not to duplicate the program, but to be sure that all the holes used by the program are properly plugged. Furthermore, examining the code may help administrators and vendors develop defenses against future attacks, despite the claims to the contrary by some of the individuals with copies of the reverse-engineered code.
Looking at Spafford’s arguments now, you can see tenets that proponents of full disclosure still use to argue their position. I doubt this was his intention at the time, and I’m not sure where Spafford even stands on the disclosure issue these days, but the fact that this argument still hasn’t been settled to anyone’s satisfaction is sad and endlessly frustrating. Security researchers are so sick of the topic that many of them won’t even discuss it anymore, and many in the vendor community have simply accepted that there’s little they can do to influence the way vulnerabilities and exploit code are disclosed. I’m interviewing Spafford on Thursday during our Information Security Decisions conference in Chicago and I’m going to have to bring this up, much to his dismay, I’m sure.
There are several other sections of Spafford’s paper that are eerily prescient, as well, including a footnote describing a tactic that would come to be standard operating procedure for attackers:
A devious attack would have loosed one version on the net at large, and then one or more special versions on a select set of target machines. No one has coordinated any effort to compare the versions of the worm from different sites, so such a stratagem would have gone unnoticed. The code and the circumstances make this highly unlikely, but the possibility should be noted if future attacks occur.
Sound familiar? By the way, if you’re wondering what ever happened to Robert Morris himself, he’s teaching computer science at MIT. How’s that for coming full circle?