"Current systems offer little or no protection from viral attack -- the only provably 'safe' policy as of this time is isolationism."
--Fred Cohen, "Computer Viruses: Theory and Experiments," 1984
Funny how the more things change, the more they stay the same. Twenty years after Cohen wrote these words, we still haven't got a clue how to stop viruses, and the state-of-the-art in virus defense remains soft.
I understand it's a difficult problem. Windows has more holes than a sieve. AV scanners are inherently reactive. End users are double-clicking dopes. You can't patch systems fast enough. Budgets are tight. Yada, yada, yada.
If you're a security pro, these explanations make perfect sense. But if you're not, they sound like, well, a bunch of excuses.
For a profession that's struggling to gain respect, credibility and funding, that's not a good thing. You can talk all you want about security's growing role in the business, but it's hard to be taken seriously when you can't solve 20-year-old problems.
One of the reasons security remains a black art is that we've grown accustomed to failure. I'm continually amazed that we're willing to spend buckets of money on something called "antivirus," when, clearly, it's anything but. We've done an expert job managing low expectations, too. When there's a new virus breakout, management doesn't say, "How come we got nailed?" but rather "What's the damage?" They expect security to fail because we've conditioned them to expect failure.
Consumed by everyday reactive security -- cleaning up after virus infections, babysitting IDSes, configuring firewalls -- security pros never have enough time, money or energy to actually reduce IT risk. I've never met a security manager who said he had a sufficient budget. But look at how information security budgets are actually spent, and you have to wonder if anybody's spending money on the right things.
Even in large companies with progressive security programs, IT risk management is more of a hobby than a discipline. Most companies rely exclusively on qualitative risk assessments, even for systems and exposures where quantitative risk information is available. And, when quantitative data is collected, it may not make it into the hands of the higher-ups for fear they'll overreact.
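The quantitative risk data mentioned above usually boils down to something like the classic ALE (Annualized Loss Expectancy) model from risk management textbooks. As a minimal sketch -- the asset values and incident rates below are hypothetical placeholders, not data from any real assessment:

```python
def annualized_loss_expectancy(asset_value, exposure_factor, annual_rate):
    """ALE = SLE * ARO.

    SLE (Single Loss Expectancy) = asset value * exposure factor
    (the fraction of the asset's value lost in one incident).
    ARO (Annualized Rate of Occurrence) = expected incidents per year.
    """
    sle = asset_value * exposure_factor
    return sle * annual_rate

# Hypothetical example: a $500,000 mail server, 30% of its value lost
# per worm infection, with two infections expected per year.
ale = annualized_loss_expectancy(500_000, 0.30, 2)
print(ale)  # 300000.0
```

Numbers like this are only as good as the loss and frequency estimates behind them, which is exactly why so many shops fall back on qualitative rankings instead.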
This is slowly changing. AV vendors continue to develop proactive scanning and filtering technologies, such as behavior blockers and protocol anomaly detectors that "throttle" malware based on network traffic patterns and host behavior. They're also working on generic exploit blocking: Given a known vulnerability (a keyhole), the AV system will scan for types of exploits (the key or keys).
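The "throttling" idea is simple enough to sketch: a worm-infected host suddenly fans out connections to many hosts it has never talked to before, so a behavior blocker can let traffic to familiar destinations flow freely while rate-limiting connections to novel ones. This is a minimal illustration of that principle, not any vendor's implementation; the working-set size and per-interval budget are hypothetical tuning values.

```python
from collections import deque

class ConnectionThrottle:
    """Rate-limit connections to destinations the host hasn't seen recently."""

    def __init__(self, max_new_per_tick=1, working_set_size=5):
        # Recently contacted destinations; old entries age out automatically.
        self.working_set = deque(maxlen=working_set_size)
        self.max_new_per_tick = max_new_per_tick
        self.new_this_tick = 0

    def tick(self):
        """Call once per time interval to replenish the budget for new hosts."""
        self.new_this_tick = 0

    def allow(self, dest):
        """Permit traffic to familiar hosts; throttle worm-like fan-out."""
        if dest in self.working_set:
            return True                      # known destination: always allowed
        if self.new_this_tick < self.max_new_per_tick:
            self.new_this_tick += 1
            self.working_set.append(dest)    # remember the new destination
            return True
        return False                         # over budget: queue or drop
```

Normal users rarely contact more than a handful of new hosts per second, so a tight budget barely touches legitimate traffic while slowing a scanning worm to a crawl.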
Meanwhile, smart enterprises are cobbling together alternative AV solutions, such as worm-catching honeypots and reverse (outbound) IDSes.
On the risk management front, organizations are warming up to the need for automated security workflow and decision-support software. Eventually, new systems will automatically be categorized in terms of risk exposure; business managers will perform ongoing asset valuations; and standardized messaging formats will allow all this information to be integrated and centrally managed.
While change is afoot, the absence of effective AV software forces us to fight malware with duct tape and baling wire. And the lack of mature, automated tools and standardized messaging protocols forces us into thumb-in-the-air risk modeling. In both cases, black magic reigns.
Would you be surprised if, in 2024, Fred Cohen's observations about viruses still apply? Neither would I.
Andrew Briney, CISSP, is editorial director of TechTarget's Security Media Group, which includes Information Security, SearchSecurity.com and the Information Security Decisions conference.