What's your take on the alleged HealthCare.gov security issues? How egregious are they, which technologies or implementation methods are to blame, and what lessons can enterprises learn?
One of the worst HealthCare.gov website issues that has come to light is the one discovered by Ben Simo, a software tester in Arizona. Simo found that it was pretty straightforward to gain access to users' accounts by resetting their HealthCare.gov password. Some very basic mistakes in the coding of the authentication and password reset pages meant that even someone with rudimentary knowledge of website code could successfully hijack an account.
For example, when a username and password combination was entered incorrectly, instead of simply saying that the combination wasn't recognized, HealthCare.gov confirmed the existence of a guessed username. Even worse, the password reset code was visible in the page's source code, allowing malicious users to generate their own password reset requests for other account holders. The website then displayed three security questions which, when answered incorrectly, revealed the account owner's email address. With an email address in hand, hackers can often find the answers to the three security questions required for a password reset simply by browsing the owner's social media profiles.
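Both flaws have well-known fixes: failed logins and reset requests should return one generic message regardless of whether the account exists, and reset tokens should be unguessable values held server-side, never embedded in page markup. The following is a minimal Python sketch of those two controls; the in-memory USERS and RESET_TOKENS stores are hypothetical stand-ins for a real credential database, which would of course hold salted password hashes rather than plaintext.

```python
import hmac
import secrets

# Hypothetical in-memory stores standing in for a real user database;
# a production system would store salted password hashes, not plaintext.
USERS = {"alice": "correct-horse-battery-staple"}
RESET_TOKENS = {}  # token -> username, kept server-side only

def login(username, password):
    """Return one generic message whether the username or the password
    is wrong, so responses cannot be used to enumerate accounts."""
    stored = USERS.get(username, "!no-such-user!")  # dummy keeps timing uniform
    match = hmac.compare_digest(stored, password)   # constant-time comparison
    if not (match and username in USERS):
        return "The username or password you entered is incorrect."
    return "Login successful."

def request_password_reset(username):
    """Generate an unguessable token and keep it server-side; never
    embed reset codes in the page source sent to the browser."""
    if username in USERS:
        RESET_TOKENS[secrets.token_urlsafe(32)] = username
    # Respond identically whether or not the account exists.
    return "If that account exists, a reset link has been emailed to it."
```

Note that a wrong password for a real user and any password for a nonexistent user produce byte-identical responses, which is exactly the property the HealthCare.gov pages lacked.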
While these errors aren't technology-based, in the end they come down to poor coding and implementation. As Simo said, "Either the developers were incompetent and did not know how to do the basic things to protect user information, or the development was so fractured that the individuals building the system didn't understand how they fit into the bigger picture."
Big projects, such as the HealthCare.gov website, tend to have onerous deadlines, and unfortunately security is an element of development that often gets dropped when deadlines are tight. Project managers should always have coders with security training working on the authentication functions and components of a website. These coders must understand the concepts of secure design and topics such as threat modeling, secure coding and security testing. They should also be up to date on any relevant regulations that prescribe how personal or sensitive information must be processed and protected.
Many such errors are the direct result of development teams that are too large and don't communicate well. It's vital that all code is clearly documented with a description of what the code does, why, how, and what assumptions are made when the code is called. It's also important to list all other code that references a particular component, to help ensure changes to the component won't break assumptions and logic elsewhere.
Whenever changes are made, security controls should be revalidated. Wherever possible, this should be done by someone independent of the development team so that the review is impartial. Implementing a structured software development lifecycle can not only provide assurance that all code has passed inspection and testing, but also ensure controls are validated before being used in a production environment. Security has to be seen by developers as a feature that will be tested just like any other feature or requirement.
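Treating security as a testable feature can be as concrete as an automated regression check that fails the build when a control slips. A minimal sketch: verify that failed-login responses are identical for a valid and an invalid username, so the defect Simo found could never silently reappear. The authenticate() function here is a hypothetical stand-in for the real authentication component under test.

```python
def authenticate(username, password):
    """Hypothetical stand-in for the authentication component under test."""
    users = {"alice": "s3cret"}  # placeholder credential store
    if users.get(username) == password:
        return (200, "Welcome back.")
    # One generic failure response, regardless of why the login failed.
    return (401, "The username or password you entered is incorrect.")

def test_no_username_enumeration():
    # Wrong password for a real user vs. any password for a fake user:
    # the status code and message must match exactly, or the response
    # can be used to confirm which usernames exist.
    real = authenticate("alice", "wrong-password")
    fake = authenticate("mallory", "wrong-password")
    assert real == fake, "failure responses differ: usernames can be enumerated"

test_no_username_enumeration()
```

Run as part of the test suite, a check like this turns a secure-coding guideline into an enforced requirement rather than a one-time review finding.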
Finally, enterprises must always have a way for users to report problems and ensure that they are dealt with quickly and efficiently. Simo tried to report the HealthCare.gov defect as soon as he found it, but was pointed to law enforcement instead of website managers. Enterprises would be well served to review whether their sites provide effective methods with which visitors can report problems, security-related or otherwise.