Modern medical devices increasingly rely on microprocessors, embedded software, and sophisticated communications links for life-saving functionality. An insulin pump, for example, combines a battery, pump mechanism, microprocessor, sensors, and embedded software. Pacemakers and cardiac monitors likewise contain batteries, sensors, and software. Many devices also offer WiFi- or Bluetooth-based communications, and even the intravenous drug delivery systems in hospital rooms are controlled by embedded microprocessors and software, frequently connected to the institution's network. But these innovations also mean that a software defect can cause a critical failure or a security vulnerability.
In 2007, former vice president Dick Cheney famously had the wireless capabilities of his pacemaker disabled over concerns "about reports that attackers could hack the devices and kill their owners." Since then, the vulnerabilities created by the larger attack surface of modern medical devices have gone from hypothetical to demonstrated, partly because of the complexity of the software and partly because of failures to properly harden the code.
In October 2011, The Register reported that "a security researcher has devised an attack that hijacks nearby insulin pumps, enabling him to surreptitiously deliver fatal doses to diabetic patients who rely on them." The attack worked because the pump contains a short-range radio that allows patients and doctors to adjust its functions. The researcher showed that, using a special antenna and custom-written software, he could locate and seize control of any such device within 300 feet.
In a report examining 12 hospitals, Independent Security Evaluators (ISE) concluded "that remote adversaries can easily deploy attacks that manipulate records or devices in order to fully compromise patient health" (p. 25). Later in the report, the researchers demonstrate the ability to manipulate the flow of medicine or blood samples within the hospital, resulting in the delivery of improper medication types and dosages (p. 37), all from the hospital lobby. They were also able to hack into and remotely control patient monitors and breathing tubes, and to trigger alarms that might cause doctors or nurses to administer unneeded medications.
Regulators are taking note. In June 2013, the U.S. Food and Drug Administration (FDA) issued a security alert about "Cybersecurity for Medical Devices and Hospital Networks." While most of these guidelines focus on security-specific concerns (such as protecting usernames and passwords), they include best practices for embedded software developers. For example, the FDA is concerned about "security vulnerabilities in off-the-shelf software designed to prevent unauthorized device or network access, such as plain-text or no authentication, hard-coded passwords, documented service accounts in service manuals, and poor coding/SQL injection."
While security breaches are top-of-mind for many organizations, a bigger concern should be ordinary software defects. A buffer overflow, an insufficiently tested error handler, or a memory leak that would merely crash a smartphone app, for example, might cause a patient real harm in a medical device.
Implementing an automated software engineering policy can prevent defects from being introduced into the codebase and reduce business risk. But many medical device software makers appear to lack the process visibility that would ensure that the policy is being followed. According to a July 2012 report by the Electronic Engineering Journal, “In medical devices that contain software, it can be extremely difficult to assess if a firm follows their processes for design controls, especially in the areas of validation, risk/hazard analysis, and design changes.” As a result, several coding defects were reported. Furthermore, “some defects were basic violations of software coding practices, while others were new defects that were introduced during the correction of previous defects.”
What can be done? Developers of medical devices need to follow best practices for safety-critical embedded software development, while being aware that increasingly sophisticated functionality (including external connectivity) adds to the threats against, and the non-determinism of, those devices. When a heart monitor or IV pump can be managed and controlled remotely, the risks multiply far beyond those of a simple standalone device.
A helpful guide is the IEC 62304:2006 standard, "Medical device software — Software life cycle processes." Like other such standards, it is very detailed, covering design, architecture, unit testing, integration testing, and more. The best solution: understand the risks, and follow the guidelines to manage them throughout the software development lifecycle. It can be a matter of life and death.
By the way, Parasoft’s static analysis includes several built-in rules for developing FDA-compliant software using C/C++, .NET languages, and Java.
Alan Zeichick is principal analyst at Camden Associates; previously, Alan was Editor-in-Chief of BZ Media’s SD Times. Follow him @zeichick.