Digital security could learn a lot from engineering's great disasters.
Watching man-made failures on The History Channel's "Engineering Disasters," I realized that lessons learned the hard way by mechanical, structural and chemical engineers can easily be applied to those practicing digital security.
In 1931, extended rain breached levees on China's Yangtze River, killing 145,000 people. The Chinese government's flood relief efforts were hampered by the Japanese invasion, and later, civil war. The levees had been built decades earlier by amateur builders, mostly farmers protecting their lands.
This disaster showed the weaknesses of defenses built by amateurs, for which no one is responsible. It also showed how other security incidents can degrade recovery operations.
In 1944, a natural gas fire devastated part of Cleveland, killing 128 people. Engineers built a gas tank that failed when exposed to the extreme cold of liquefied natural gas; nearby structures were torched when the leaking gas ignited. Engineers weren't aware of the tank's failure properties, and no defensive measures were in place to protect civilian infrastructure.
This disaster revealed the need to (1) implement plans and defenses to contain catastrophes, (2) monitor to detect problems and warn potential victims and (3) test all equipment to be sure it operates as expected. Today, liquefied natural gas tanks are surrounded by berms capable of containing a spill and are monitored for indications of problems; also, buildings stand far from tanks, just in case.
In 1981, a walkway in the Kansas City Hyatt Regency hotel collapsed, killing 114 people. A construction change approved by the "structural engineer of record" resulted in an incredibly weak implementation. Cost was not to blame; a part that might have prevented the failure sold for $1. Lack of oversight, poor accountability, broken processes, a rushed build and compromise of the original design were at fault. Sound familiar?
This disaster suggests the value of assigning an individual top-level accountability for enterprise security: your own "security engineer of record." If he is unwilling to put his stamp on the network, it could indicate intolerable problems. If he stamps a plan and a massive failure from poor design occurs, the engineer is held responsible and appropriate actions can then be taken against him. The Hyatt's two engineers of record lost their licenses.
In 1993, a massive sinkhole swallowed and killed two people in an Atlanta Marriott hotel parking lot. A sewer drain designed and built decades earlier for above-ground use had been buried 40 feet under the parking lot. A "safety net" retrofitted under the lot was supposed to provide security by giving hotel owners time to evacuate the premises if a sinkhole developed. Instead, the safety net masked the presence of the sinkhole and let it enlarge until it was more than 100 feet wide, exceeding the net's capacity.
This disaster demonstrated the importance of operating a system only within its original design parameters, and how some products can introduce a false sense of security or unintended consequences.
Disasters are tragic, and the only possible good they provide is the opportunity to learn and prevent future catastrophes. Digital security engineers should not overlook these opportunities, either.