Published: 28 Nov 2008
How is it that in spite of all the work we've done, all the research we've performed, all the best practices we've developed, all the clever technology we've created and all the awareness we've raised, important systems vital to daily life are still suffering fatal security failures?
A researcher recently announced that SCADA systems are actually pretty easy to hack, and that it would be a bad idea to connect them to the Internet. Well, this is hardly news.
We've also learned that an access control technology used for the London transport fare card system is vulnerable to attack. Equally unsurprisingly, a July research report from a group at UC Santa Barbara describes multiple concerns about the integrity of voting machines from two different manufacturers. According to the group's carefully reasoned document, "knowledge of basic security concepts, their application, and defensive programming practices should be prerequisites for the developers of critical systems such as an electronic voting system." This idea undoubtedly seems like common sense to virtually all readers of this magazine, yet the fact remains that a huge amount of highly vulnerable stuff continues to appear. Why is this happening?
Ultimately, such failures are at least partly our fault. We still have not gotten across the message that security can be done well only by competent experts who have spent years learning their craft. Complex, ill-conceived technology that cannot withstand even moderate attack keeps getting built. Planners don't account for the need for security specialists, and system designers have no clue that security protocols and encryption implementations will fail unless they are subjected to extensive independent review by specialists.
They will never know any better until our profession convinces them that we offer a valuable specialty that needs to be applied. Each new generation of architects and coders will make the same old security mistakes. This doesn't have to happen. Doctors, accountants and lawyers have all established themselves as professionals and created recognized demand for their services. Why haven't we? Perhaps our communication methods are wrong.
Security researchers at small consultancies, the hacker fringe, book authors and prestigious universities are just voices calling out in the wilderness. Sure, the latest stupid vulnerability makes for great blog fodder and trade press articles, but the overall impact of rubbing vulnerabilities in the face of the IT industry has been mixed at best. We have not yet found an effective way to communicate that security is important, that it is difficult, and that it can't be left to amateurs.
The biggest challenge for our profession is not dealing with the consumerization of IT or an ambiguous perimeter. Nor is it deciding whether to describe ourselves as risk managers or information assurance specialists, or carefully devising and following best practices. Rather, the biggest challenge--and the largest contribution we could make--is to establish ourselves as a recognized profession providing essential expertise that must be brought to bear when developing complex digital systems.
We've done this within plenty of enterprises, so I know we can do it. We're starting to do it within service providers, where our influence is growing steadily. What we need to do now is break out of the IT shop and start influencing the makers of critical infrastructure. Until we crack that nut, the world will be stuck with a fragile infrastructure.