There’s been a lot of talk lately about Web 2.0 (Web applications that facilitate sharing, collaboration and user-managed design, such as social media, blogs and wikis) greatly expanding the threat landscape. The first time I heard this claim, I didn’t take it seriously because it came from someone outside of information security. Lately, however, fellow information security professionals have begun making the same or similar assertions. Frankly, the threat landscape has not expanded because of Web 2.0.
Web 2.0 may represent another attack vector, but the same old threat landscape exists. Even without Web 2.0, technology is still highly vulnerable to threats and attacks. Humans make technology, and as much as we want to be perfect, we are not. Sure, companies can embed quality checks into technology; however, the dynamic life of technology makes it hard to guarantee quality 100 percent of the time.
Case in point: the non-profit Open Web Application Security Project (OWASP) is doing a fantastic job of evangelizing secure coding. It’s working, to a degree, for those organizations willing to invest in training their developers, but such organizations are rare. Secure coding as a core competency is absent in the developer community. If developers are in a heavily targeted industry such as banking, or work in an organization that must meet PCI requirements or that has suffered a security breach and privacy sanctions, then secure coding may be part of the software development lifecycle. But even if the developers code securely, consider the upstream chance that someone will not patch the server the application is hosted on, or will add the dreaded “any any” rule to the firewall. The weakest link has always been humans.
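To make the “any any” problem concrete, here is a minimal Python sketch of a rule audit. The rule format and the `find_any_any` helper are invented for illustration; real firewall configurations (iptables, Cisco ACLs and so on) look different, but the idea of scanning for permit-everything rules carries over.

```python
# Hypothetical rule audit: flag firewall rules that permit any source
# to reach any destination. The dict-based rule format is illustrative,
# not taken from any real firewall product.

def find_any_any(rules):
    """Return the rules that permit traffic from any source to any destination."""
    return [r for r in rules
            if r["action"] == "permit"
            and r["src"] == "any"
            and r["dst"] == "any"]

rules = [
    {"action": "permit", "src": "10.0.0.0/8", "dst": "any", "port": 443},
    {"action": "permit", "src": "any", "dst": "any", "port": "any"},  # the dreaded rule
    {"action": "deny",   "src": "any", "dst": "any", "port": "any"},  # default deny is fine
]

risky = find_any_any(rules)
print(len(risky))  # 1
```

Even a simple check like this, run against exported rule sets, catches the kind of human error the column describes.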
Consider that attackers typically take the surest path of exploitation. If Web 2.0 did not exist, attackers would target the vector offering the greatest critical mass. For example, appliance-based technology (e.g., SSL VPNs or application delivery controllers) is ripe for exploitation when we consider that it is built on open source software freely available to anyone who wants to study it. However, it takes a bit more effort and expertise to abuse the access gained once an exploit has succeeded. There will always be new attack vectors; information security professionals should expect them.
Looking at the threat landscape from a service-oriented architecture (SOA) perspective, attackers build on the existing threat landscape by reusing Web 2.0 as an additional attack vector. Attacks over ports 25, 80 and 443 are commonplace in Web 1.0 technologies. Attackers reaped the benefits of attacking traditional Web services and have taken that knowledge to use against Web 2.0: iFrame, code injection and cross-site scripting (XSS) attacks. The black hat community draws from lessons learned in writing exploits against Web 2.0 technology. One of the biggest lessons is that exploitation is possible when defense in depth is rote as opposed to rational. Rational defense in depth layers defenses from at least two perspectives, creating a mesh of defenses that is difficult to defeat. Rote defense in depth is a checklist you can show your auditors; a look beneath the hood will reveal the absence of technology tuning and, in some cases, the disabling of features that are integral to a strong defense posture.
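The point that XSS and code injection are old flaws reused in a new setting can be shown in a few lines. This is a minimal Python sketch, using only the standard library’s `html.escape`; the two render functions are hypothetical stand-ins for any template that interpolates user input into markup.

```python
import html

def render_comment_unsafe(user_input):
    # Vulnerable pattern: user input is concatenated directly into markup,
    # so injected <script> tags reach the browser intact.
    return "<p>" + user_input + "</p>"

def render_comment_safe(user_input):
    # Escaping turns markup characters into entities, neutralizing the payload.
    return "<p>" + html.escape(user_input) + "</p>"

payload = "<script>alert('xss')</script>"
print(render_comment_unsafe(payload))  # contains a live <script> tag
print(render_comment_safe(payload))    # inert: &lt;script&gt;...
```

The vulnerable pattern is identical whether the page is a static Web 1.0 form or a Web 2.0 comment feed; only the delivery surface changed.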
An example of rote defense in depth is the now-infamous Google hack, in which criminals launched whaling attacks to gain access. The attack was labeled “sophisticated” because it used encrypted channels to hide its presence. Since at least 1999, firewall technology has provided protocol inspection to defeat protocol tunneling, but some networking and information security professionals have been led to believe protocol inspection either breaks applications or slows down network traffic. Networks that have been sized correctly with data flow analysis will rarely run into problems leveraging protocol inspection.
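The core idea behind protocol inspection is simply verifying that traffic on a port actually speaks the expected protocol. A minimal Python sketch of that idea for port 443: a TLS record begins with content type 0x16 (handshake) followed by a 0x03 protocol-version byte, so anything else on the port (an SSH tunnel, say) stands out. Real inspection engines go far deeper; this only illustrates the first-bytes check.

```python
def looks_like_tls_handshake(first_bytes):
    """Rough first-pass check: a TLS record starts with content type 0x16
    (handshake) and a 0x03 major version byte in the record header."""
    return (len(first_bytes) >= 3
            and first_bytes[0] == 0x16
            and first_bytes[1] == 0x03)

tls_start = bytes([0x16, 0x03, 0x01, 0x02, 0x00])  # start of a TLS ClientHello record
ssh_start = b"SSH-2.0-OpenSSH_8.9\r\n"             # SSH banner tunneled over port 443

print(looks_like_tls_handshake(tls_start))  # True
print(looks_like_tls_handshake(ssh_start))  # False
```

A firewall applying even this shallow a check would flag the tunneled session that rote, port-number-only filtering waves through.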
Ultimately, Web 2.0 is here to stay, but it hasn’t radically changed the threat landscape. We’re still dealing with the same fundamental threats: fallible humans and flawed technology. Rational analysis is the best way to determine the right defenses.
Ravila Helen White is the director of enterprise security and architecture at a company in the Pacific Northwest. Prior to that, she was the head of information security at The Bill & Melinda Gates Foundation and drugstore.com. Send comments on this column to email@example.com.