At Black Hat Las Vegas and Defcon 2010, several presentations were made on the state of SSL and SSL vulnerabilities.
Many users and some businesses are overly reliant on SSL, believing it to be a panacea for Web security. Implementing SSL on a website does not protect an organization from all Web security vulnerabilities; it only provides -- in the best case -- an encrypted connection between the client and the server. In this tip, we'll explain why businesses should carefully assess the risks to their environments posed by recently discovered SSL vulnerabilities and what they can do to minimize the risks.
State of SSL: SSL vulnerabilities and attacks
SSL was developed in 1994 by Netscape to enable trusted connections for ecommerce on what was then the emerging communications medium known as the Internet. Since then, the protocol has seen many improvements, including its successor, Transport Layer Security (TLS), but there have also been many vulnerabilities and attacks against SSL over the years. The EFF SSL Observatory Project at Defcon 2010 and the Qualys SSL Labs presentation at Black Hat 2010 both demonstrated security deficiencies in the current state of SSL.
As part of its research, the EFF project collected SSL certificates in use on the Internet and noted interesting behavior by SSL clients and servers. One of its most important discoveries was that a significant number of servers using SSL, as well as certificate authorities themselves, are vulnerable to security flaws, especially to transparent man-in-the-middle attacks. Many of these servers and CAs are still trusted by their enterprise customers.
The Qualys SSL Labs presentation focused on cipher options and weaknesses in the SSL protocol as it is used on the Internet. One of its most important points was that Web browsers probably need only 10 to 20 root certificate authorities installed to use SSL with the majority of websites, rather than all of the certificate authorities that ship by default in Internet Explorer or Firefox. Because any CA can sign a certificate for any DNS name, clever, ambitious attackers could cause major damage across the Web by compromising fewer than two dozen CAs -- or any of the other default trusted CAs -- and that's definitely cause for concern.
One of the attacks noted in the presentations involved the use of weak certificates generated on Debian Linux systems; these certificates were created before a patch to Debian's OpenSSL package resolved the weak key generation issue. Such certificates allow for man-in-the-middle attacks, collision attacks, or attacks wherein the hacker brute-forces the certificate authority root key and can then impersonate any certificate. An SSL renegotiation vulnerability was also presented that could allow an attacker to take over SSL connections.
Knowing which systems and configurations are vulnerable to attack, and what the specific risks are for an enterprise, is difficult, especially when factoring in today's complex SSL setups that often include load balancers and wildcard certificates. The threats an enterprise faces are not trivial, but identifying the potentially vulnerable systems is possible.
Using the work the above-mentioned research teams put forth, attackers can identify not only an enterprise's SSL servers, but also the specifics of how SSL is used on those servers. Once an attacker knows where the insecure SSL servers are, he or she can use that information to launch attacks, such as man-in-the-middle attacks, to view or manipulate SSL traffic. These attacks can also be used against other protocols, so, for greater security, you could extend the studies, using the same methods as the EFF and Qualys researchers, to non-HTTP protocols that use SSL, such as IMAP and SMTP. This offers a more in-depth understanding of SSL usage on your servers, and can help you identify where insecure certificates or protocols are used for encryption, as well as where an attacker could exploit the encryption in a system.
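As a rough illustration, this kind of per-service check can be scripted with Python's standard ssl module. The protocol and cipher names treated as weak below are illustrative assumptions for the sketch, not an authoritative list:

```python
import socket
import ssl

# Assumed weak indicators for this sketch; tune to your own policy
WEAK_PROTOCOLS = {"SSLv2", "SSLv3", "TLSv1"}
WEAK_CIPHER_MARKERS = ("RC4", "EXP", "MD5", "NULL")

def is_weak(protocol, cipher_name):
    """Flag a negotiated protocol/cipher pair as weak."""
    if protocol in WEAK_PROTOCOLS:
        return True
    return any(marker in cipher_name.upper() for marker in WEAK_CIPHER_MARKERS)

def probe_tls(host, port):
    """Connect to a TLS service and report what it negotiates."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # survey mode: record, don't reject
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cipher_name, protocol, _bits = tls.cipher()
            return {
                "host": host,
                "protocol": protocol,
                "cipher": cipher_name,
                "weak": is_weak(protocol, cipher_name),
            }
```

For STARTTLS-based protocols such as SMTP, the same inspection can be done on the socket after the STARTTLS exchange (e.g., via `smtplib.SMTP.starttls()`).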
To ascertain whether clients are vulnerable to the exploits mentioned above, check the version of the Web browsers in use on your network through passive traffic analysis or by checking the client systems. This check can help identify which certificate authorities the clients trust by default, and whether those CAs could be used to attack a system or the connections it makes. The same approach can work for non-HTTP protocols, but may be more difficult to perform, since non-HTTP clients may support or commonly use different encryption options.
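One passive way to build that browser inventory is to tally User-Agent strings from existing Web server logs. The sketch below matches only a few common browser families and is an assumption-laden starting point, not a complete parser:

```python
import re
from collections import Counter

# Minimal, hypothetical set of User-Agent patterns; extend as needed
UA_PATTERNS = [
    ("MSIE", re.compile(r"MSIE (\d+[\d.]*)")),
    ("Firefox", re.compile(r"Firefox/(\d+[\d.]*)")),
    ("Chrome", re.compile(r"Chrome/(\d+[\d.]*)")),
]

def browser_version(user_agent):
    """Return a (family, version) pair for a User-Agent string, or None."""
    for family, pattern in UA_PATTERNS:
        match = pattern.search(user_agent)
        if match:
            return family, match.group(1)
    return None

def inventory(user_agents):
    """Count the browser family/version pairs seen in a log extract."""
    return Counter(
        hit for hit in (browser_version(ua) for ua in user_agents) if hit
    )
```

Outdated versions surfaced by the tally can then be checked against known SSL-related browser fixes, such as the renegotiation patches.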
Enterprise SSL defense strategy
To defend your enterprise and your users against SSL threats, you will need to understand where and how SSL is used in your environment. This can be done using methods like those of the EFF and Qualys, in addition to monitoring network traffic or conducting inventories of the client and server software in use. Unless your enterprise has significant expertise in PKI and SSL, sticking to industry-standard SSL and certificate configurations will help prevent significant issues on the server side, since it is easy to configure SSL insecurely. On the client side, running the most up-to-date software with secure configurations, including a current default trusted certificate authority list, will help prevent attacks that abuse SSL. If your enterprise has a demonstrated need for complex SSL configurations -- such as an SSL accelerator or smart cards that only support certain types of certificates -- or the expertise to manage the complexity, those configurations must be carefully managed, since, as the presentations showed, it is easy to enable insecure settings.
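To make "industry standard configuration" concrete, here is one way a conservative server-side TLS context could be expressed with Python's standard ssl module. The minimum version and cipher string are illustrative choices for this sketch, not a universal recommendation:

```python
import ssl

def hardened_server_context():
    """Build a server-side TLS context with conservative defaults."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    # Refuse legacy protocol versions outright
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    # Classic OpenSSL cipher string: strong suites only, no anonymous or MD5
    ctx.set_ciphers("HIGH:!aNULL:!MD5")
    return ctx

# The caller would then load its own certificate and key, e.g.:
# ctx = hardened_server_context()
# ctx.load_cert_chain("server.pem", "server.key")  # hypothetical paths
```

The point of centralizing this in one function is that the secure settings are applied identically everywhere, rather than re-derived per service.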
One last control that could prevent attacks from potentially rogue certificate authorities is enabling -- or whitelisting -- only the necessary certificate authorities in the Web browsers used by your organization. However, this may require significant effort, both to discover which certificate authorities are actually needed and to disable all the others.
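One hedged starting point for that discovery step is to tally the issuers of the certificates your own sites actually present. The sketch below uses Python's standard ssl module; the host list is hypothetical:

```python
import socket
import ssl
from collections import Counter

def issuer_org(cert):
    """Extract the issuer organizationName from a getpeercert() dict."""
    for rdn in cert.get("issuer", ()):
        for key, value in rdn:
            if key == "organizationName":
                return value
    return "unknown"

def ca_tally(hosts, port=443):
    """Count which CAs issued the certificates the given hosts present."""
    ctx = ssl.create_default_context()  # verifies against the local trust store
    counts = Counter()
    for host in hosts:
        try:
            with socket.create_connection((host, port), timeout=5) as sock:
                with ctx.wrap_socket(sock, server_hostname=host) as tls:
                    counts[issuer_org(tls.getpeercert())] += 1
        except (OSError, ssl.SSLError):
            counts["unreachable"] += 1
    return counts
```

Certificate authorities that never appear in the resulting tally are candidates for disabling in managed browser deployments.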
You could start with the 17 trusted SSL certificate authorities identified in the Qualys SSL Labs presentation and add others as needed. Also consider using Extended Validation (EV) certificates on your servers; these provide a higher level of assurance than standard certificates regarding both the issuing certificate authority and the organization using the certificate, demonstrating to Web browsers that your website is legitimate. EV certificates may not be worth the trouble, however, as they do not change the SSL protocol and share most of the problems of regular certificates. They do help users distinguish between different types of certificates and their issuers, and updated high-security Web browsers display a green bar when an EV certificate is in use.
While SSL security may seem to involve ever-increasing risk, improvements are being made in the SSL/TLS protocols, servers and client systems to protect against these vulnerabilities and exploits. The tools operating system and application vendors provide to manage and support SSL have improved drastically as they are more widely deployed, which matters because maintaining the security of SSL and related systems is critical to protecting confidentiality on the Internet. Enterprises should also monitor the discussions about which certificate authorities are included by default in Web browser trust lists, in order to know if any organizations they do not trust are added.
About the author:
Nick Lewis (CISSP, GCWN) is an information security analyst for a large public Midwest university, where he is responsible for the risk management program and also supports its technical PCI compliance program. Nick received his Master of Science in Information Assurance from Norwich University in 2005 and Telecommunications from Michigan State University in 2002. Prior to joining his current organization in 2009, Nick worked at Children's Hospital Boston, the primary pediatric teaching hospital of Harvard Medical School, as well as for Internet2 and Michigan State University. He also answers your information security threat questions.