Co-authored with Chandu Ketkar
Since 1996 my company has analyzed hundreds of systems -- both big and small -- built for many different purposes. Recently, as security attention has turned to the healthcare vertical, my company and I have been called on to analyze medical devices. This article is a quick overview of what I've seen, covering both our approach and some of our most common findings.
Holistic, risk-based security analysis
Security analysis means many different things to different people. Some believe penetration testing is the way to go, simulating an outside attacker who may know very little about the system. Some rely on standard black-box tools as a benchmark (these kinds of tools make great badness-o-meters, but aren't very good security meters). Some peek under the covers at the code using a static analysis tool to look for bugs (flaws are another story). At Cigital, we on occasion use all of these methods together in a holistic analysis.
In our view, a holistic analysis is best suited for medical devices. That is, we want to understand potential risks at multiple levels and determine their impact on patients, hospitals and device manufacturers. A risk-based approach based on understanding context makes the most sense for a medical device.
A holistic approach can both view a medical device from an attacker's viewpoint and assess the physical, platform, application and communication levels simultaneously. Figure 1 shows how many different levels of security can impact the same device. An attacker can compromise the device using physical-level attacks (e.g., attacks using the JTAG standards for integrated circuit debugging go after port-sharing debug harnesses); platform-level attacks (e.g., tampering with the boot process and/or gaining administrative privileges on the -- usually embedded -- OS); application-level attacks (e.g., exploiting various application-level vulnerabilities to steal personal health information (PHI) or to compromise patient safety); and finally, communication-level attacks (e.g., exploiting weaknesses in communication security with tools that sniff traffic or undertake capture/replay attacks). Any holistic assessment approach should -- at the very least -- analyze threats at all four of these layers.
Any assessment approach needs to be risk-based to be cost-effective. In the case of medical devices, our assessments are based on conducting an initial architecture risk analysis to identify critical functional areas from a security viewpoint and then performing lighter (and technically easier, but time-consuming) assessments, such as secure code reviews and penetration testing, in a targeted fashion.
Finally, note that any medical device assessment approach needs to be informed by the situational (business) context of the device. Relating technical risks to situational context is critical, because a real context enables an analyst to provide pragmatic solutions to any security problems that are uncovered. For example, in the real world, requiring doctors to log in to a medical device just before starting a medical procedure is a bad idea because they simply won't do it regularly. (So we can't ask them to do it as part of a proposed solution.) Any proposed solutions must take into account situational context and must be actions that can be adopted.
By the way, finding a bunch of problems and writing them down without suggesting fixes is way too common in the security industry. Why identify a security problem if you can't fix it? What good does that do? Our holistic approach suggests solutions explicitly.
Common findings from real device assessments
We've assessed various types of medical devices that together comprise the data set discussed below:
- Class II medical devices, including monitors for various implants
- Telemedicine devices used for remote health monitoring
- Specialized devices used for specific medical procedures
Taken together, findings from Cigital's assessments fall into the following broad categories (which we go on to address individually):
1. Cryptographic problems
2. Operational issues with device lifecycle
3. Communications security
4. Authentication and authorization issues
5. Lack of obfuscation controls
6. Physical and platform security issues
Cryptographic problems: In the medical device world, HIPAA regulations govern the handling of sensitive health-related data on a device. Needless to say, these data abound. Complicating the matter, there seems to be no standardization of patient IDs across healthcare providers. Where some providers use unique surrogate keys (e.g., sequence numbers), others use Social Security numbers (!!) as patient IDs. Devices in the latter camp, by definition, store PHI and/or NPPI (non-public personal information) data.
From a security viewpoint, any sensitive data -- including PHI and NPPI data -- must be encrypted both when stored on a device and in transit. We have discovered many cases where such data was not encrypted at all.
Cryptography may appear to be easy to use, but getting it right is always more difficult than it seems at first blush (Heartbleed, anyone?). We have uncovered many instances where weak cryptographic algorithms -- such as DES and MD5 -- were used, or where sound algorithms were applied with too-short key lengths or otherwise applied incorrectly. Examples of incorrect crypto use included storing a cryptographic key alongside the ciphertext it protects, treating an initialization vector (IV) as if it were a secret, and generating static IVs.
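To see why a static IV is so damaging, consider the following toy sketch (this is an illustrative keystream built from SHA-256, not a real cipher or anything we found on a device): when the same key and IV are reused, the keystream repeats, so XORing two ciphertexts cancels the keystream and leaks the XOR of the plaintexts with no key required.

```python
import hashlib

def keystream(key: bytes, iv: bytes, length: int) -> bytes:
    # Derive a deterministic keystream from key + IV + counter.
    # Illustrative only -- real designs use AES-GCM or similar.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + iv + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def xor_encrypt(key: bytes, iv: bytes, plaintext: bytes) -> bytes:
    ks = keystream(key, iv, len(plaintext))
    return bytes(p ^ k for p, k in zip(plaintext, ks))

key = b"device-master-key"
static_iv = b"\x00" * 16  # the anti-pattern: the same IV every time

m1 = b"patient id: 1234"
m2 = b"patient id: 9999"
c1 = xor_encrypt(key, static_iv, m1)
c2 = xor_encrypt(key, static_iv, m2)

# Identical IVs mean identical keystreams, so the XOR of the two
# ciphertexts equals the XOR of the two plaintexts -- an eavesdropper
# learns exactly where and how the messages differ.
leak = bytes(a ^ b for a, b in zip(c1, c2))
```

The fix is mechanical: generate a fresh random IV for every encryption and transmit it alongside the ciphertext; the IV need not be secret, only unique.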
The issue we found most commonly was a cluelessness about key management. In many cases, our general observation was that the keys and other secrets on the device were never changed and that there was no plan in place to change keys in case of key compromise. Complicating the matter, we also encountered secrets (including keys) hard-coded directly into the application code itself. This is a particularly bad idea, since it implies not only that secrets are never changed, but also that they are the same in the development, quality assurance and production environments (um, not good).
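One minimal step away from hard-coded secrets is to resolve keys by a key ID at runtime, so rotation is a configuration change rather than a code change. The sketch below loads key material from environment variables (the variable names are hypothetical; a production device would use an HSM, secure element or managed keystore instead):

```python
import base64
import os

def load_key(key_id: str = None) -> tuple:
    """Resolve the active encryption key at runtime instead of hard-coding it.

    DEVICE_KEY_ID names the currently active key, and DEVICE_KEY_<id>
    holds its base64-encoded material. Rotating keys means provisioning
    a new DEVICE_KEY_<id> entry and flipping DEVICE_KEY_ID -- no rebuild,
    and dev/QA/production can each hold different material.
    (Variable names are illustrative assumptions, not a real device's API.)
    """
    key_id = key_id or os.environ.get("DEVICE_KEY_ID", "k1")
    material = os.environ["DEVICE_KEY_" + key_id]
    return key_id, base64.b64decode(material)
```

Even this modest indirection fixes both problems we observed at once: the secret is no longer baked into the binary, and a compromise-driven rotation has a documented, tested path.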
Finally, we observed the use of proprietary encoding protocols far too often. Though we fully understand the business reasons driving the use of proprietary protocols, we always recommend our clients use standard crypto protocols wherever possible and make a deliberate decision to use proprietary protocols only when absolutely necessary. (We also recommend our clients undergo a thorough security assessment of any proprietary crypto protocol they may choose to use against our advice.)
Operational issues with device lifecycle: Retaining strict control of the device lifecycle is necessary from a security viewpoint. In our assessments we found that when devices assigned to patients were returned to some healthcare providers, there were no clear processes to securely dispose of or otherwise wipe clean such devices. In some cases, devices no longer needed by patients were sold off on auction sites such as eBay. In other cases, when devices were returned due to hardware or software defects, they were serviced by engineers as-is, meaning the engineers had access to all device data.
Communications security: We found many issues with communication security that were specific to the healthcare industry. Our findings included insecure Digital Imaging and Communications in Medicine (DICOM) protocol communication (although the DICOM standard supports encryption, some devices don't use it); unauthenticated Health Level 7 (HL7) communication; and insecure XML data handling (especially true for HL7 version 3+ protocol usage).
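Both DICOM and HL7 run over ordinary TCP sockets, so the minimum remediation is to wrap those sockets in properly verified TLS. A sketch of a client-side context using Python's standard library (the CA-bundle path and hostname are deployment-specific assumptions):

```python
import ssl

def secure_client_context(ca_file: str = None) -> ssl.SSLContext:
    """TLS context suitable for wrapping a DICOM or HL7 TCP connection.

    PROTOCOL_TLS_CLIENT enables certificate and hostname verification by
    default; we additionally refuse legacy protocol versions. In a real
    deployment ca_file would point at the hospital's CA bundle
    (an assumption here); otherwise the system trust store is used.
    """
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # no SSLv3/TLS 1.0/1.1
    if ca_file:
        ctx.load_verify_locations(ca_file)
    else:
        ctx.load_default_certs()
    return ctx

# A caller would then wrap the raw socket before speaking DICOM or HL7:
#   tls_sock = secure_client_context().wrap_socket(
#       sock, server_hostname="pacs.example.org")  # hypothetical host
```

Disabling `check_hostname` or setting `verify_mode` to `CERT_NONE` -- a shortcut we see far too often -- reduces TLS to obfuscation, since any man-in-the-middle can present a self-signed certificate.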
Authentication and authorization issues: In too many medical device cases, authentication was simply missing. User behavior (e.g., doctors do not like to log in, ever, for any reason) or physical security (e.g., the device sits in the operating room) was often the rationale for a lack of authentication controls. In other instances, device serial numbers were used as authenticators. Serial numbers make poor authenticators: they follow a predictable pattern, and in some cases are simply stamped on the device itself.
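The contrast between a predictable identifier and a usable credential is easy to show. A serial number is fine as a label, but authentication requires a high-entropy secret provisioned per device (the format and function names below are illustrative, not any vendor's scheme):

```python
import secrets

def next_serial(prev: int) -> str:
    # Serial numbers are sequential by design -- fine as an identifier,
    # trivially guessable as a credential.
    return "MD-{:08d}".format(prev + 1)

def provision_credential() -> str:
    # An unguessable per-device secret, generated at manufacture time
    # from a cryptographically secure source (~256 bits of entropy).
    return secrets.token_urlsafe(32)
```

A device should present the random credential (ideally via a challenge-response exchange rather than sending it in the clear), while the serial number stays what it is: a public label.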
Lack of obfuscation controls: Obfuscation techniques can be used to instrument or otherwise transform binary code to create barriers for reverse engineering-based attacks. Related techniques include anti-debugging controls, tamper-proofing controls and white-box cryptographic controls. Although the security provided by these obfuscation tools is usually not cryptographically strong (e.g., not always tied to a hard mathematical problem), these tools do create barriers for attackers. We would like to see medical device manufacturers consider the obfuscation toolset as part of their security control regime.
Physical and platform security issues: In general, we found physical and platform security lacking in the majority of analyses. Our security engineers were able to access a physical device without any trouble and locate JTAG ports for hardware attacks. We uncovered many instances of unhardened OSes (e.g., a keyboard device driver was available on the device when it was not really needed). We also observed multiple insecure boot processes.
Where to go from here
Both current reports and data from the BSIMM indicate that the healthcare industry's track record is poor when it comes to security. There are many reasons for this problem. First of all, medical device security is a complex problem. Some of these devices were designed and built when security was not a central concern or when the advanced controls that we know about today were not technically feasible. Some devices have been around for decades, and in many cases, use system software that is no longer supported by the vendors who provided it. Updating all of these devices is an expensive proposition. At the other end of the spectrum, the most modern devices are getting even more connected to the network (a prime vector for attack).
As new and more sophisticated threats emerge, patient safety is in play. We've heard of cases in which an exploited medical device was used to compromise central servers running critical hospital operations.
All the information we cover in this article (and more) was presented live at the second Archimedes Medical Device Security conference held at the University of Michigan in Ann Arbor in May. The conference brings together medical device manufacturers, healthcare providers, academics, security experts and regulators for real-world conversations about the state of medical device security.
There is plenty of work to be done to secure medical devices. The good news is the necessary work is underway, and very good people are paying close attention to the problem.