Archive for the ‘FDA’ Category

Why Healthcare IT is Not a Game Changer

Monday, February 15th, 2010

Last week I attended the WLSA/Continua Mobile Healthcare Symposium and the opening day of the Continua Health Alliance Winter Summit 2010. Also, a couple of weeks ago I attended a few sessions of the FDA Workshop on Medical Device Interoperability: Achieving Safety and Effectiveness via a Webcast*.

Since I'm not going to HIMSS in Atlanta this year (starts Mar. 1) I thought now would be a good time to do some venting.

I've talked about HIT problems before, e.g. Healthcare Un-Interoperability and The EMR-Medical Devices Mess. With all of the ARRA/HITECH talk, along with the raging National Healthcare debate, I've been wondering how the issues facing device interoperability, wireless Healthcare, and HIT in general really fit into the bigger picture.

After sitting through multiple sessions on a wide variety of topics presented by smart people, the obvious hit me in the face: the complexity of the issues is mind-numbing. Everybody has good (and even great) ideas, but nobody has real solutions. Why is it that all this good HIT hasn't translated into meaningful improvements in Healthcare?

For example, at first I thought the talk by Dr. Patrick Soon-Shiong might be heading somewhere interesting. He presented a well-structured view of the current Healthcare landscape that seemed to make a lot of sense. Then he plunged into the abyss with an in-depth discussion of transformational technologies (molecular data mining, Visual Evoked Potentials, etc.). These developments could potentially lead to improvements in people's health, but we never got to hear how any of the complex Healthcare delivery issues were going to be addressed.

Among his many endeavors, Dr. Soon-Shiong is Chairman of the National Coalition for Health Integration (NCHI). I think the "Zone of Complexity" point of view (see here -- warning: PDF) is a good starting point for understanding the position that Healthcare IT is in:

Also, following the diagram above is this statement:

However, currently, even when information is in digital formats, data are not accessible because they reside in different “silos” within and between organizations. In turn, the U.S. health system is hampered by inefficient virtual organizations that lack the mechanisms needed to engage in coordinated action.

The NCHI Integrated Health Platform (grid computing) is a good idea, but does it really even begin to provide the solution to these complex problems?

  1. They are taking a "bottom-up" approach to interoperability (system, data, and process) and trying to leverage existing technologies (like DICOM and HL7). Makes sense. But other than academic or government institutions, what's the incentive for private companies (like EMR vendors) to participate?
  2. How is an improved underlying infrastructure going to reduce the chaotic nature of the health delivery system (hospitals, insurance companies, Medicare, etc.)? It's like putting the cart before the horse.

This is the dilemma. We can come up with clever and even ingenious technical solutions in our little IT world, but none of them are going to be game changers. The availability of great technologies is not enough to change the institutional processes that make an organization inefficient or its communication ineffective.

The solution is in the people and the processes they follow. The best example I can think of is EMR adoption. Everybody knows why the rate of conversion from a paper to a paperless office is so low. It's mostly because of people's resistance to changing the way they've "always done it." Change is hard, and in this case HIT is the barrier to adoption, no matter how good the EMR solution is.

At the national level Healthcare IT only enables interoperability and improved data management.  The chaos can only be solved by first changing U.S. Healthcare delivery policies.  Whatever the changes are, they will then determine the incentives and processes that actually drive the system and put HIT to use.

For Healthcare IT, the NCHI is just one example. There are a whole bunch of other technology-driven initiatives that also have high hopes.  I'm not saying we should stop developing great technologies.  We just shouldn't be surprised when they don't change the world.

Happy Presidents Day!

UPDATE (8/4/10): Martin Fowler's UtilityVsStrategicDichotomy post is another perspective on "IT Doesn't Matter".

*I thought the Webcast was very well done. It had a split screen (speaker and slides) along with multiple camera views that included the audience. The video quality wasn't great (it really didn't need to be), but the streaming was reliable. Also, the web participants could chat among themselves and with the on-site staff, and ask the speaker questions.

The Challenges of Developing Software for Medical Devices

Monday, February 8th, 2010

Developing Software for Medical Devices – Interview with SterlingTech gives a good overview of the challenges that especially face young medical device companies. In particular (my emphasis):

Make sure that your company has a good solid Quality System as it applies to software development. Do not put a Quality System into place that you can not follow. This is the cause of most audit problems.

I couldn't have said it better myself, though I think that focusing on the FDA may distract you from why you're creating software quality processes in the first place. The real purpose of having software design controls is to produce a high quality, user friendly, robust, and reliable system that meets the intended use of the device.  If your quality system does that, you won't have to worry about FDA audits.

Since Klocwork is a static analysis tool company, I also want to point out a recent related article that's worth reading -- and trying to fully understand:

A Few Billion Lines of Code Later: Using Static Analysis to Find Bugs in the Real World

Note the user comment by Bjarne Stroustrup.
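If you've never run one of these tools, here's a small C sketch (my own example, not from the article) of the kind of defect a static analyzer is designed to flag without ever executing the code:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical example -- the kind of simple, mechanical defect that
 * static analysis tools (Klocwork, Coverity, etc.) report by inspecting
 * the source, not by running it. */
char *read_patient_id(FILE *fp)
{
    char *id = malloc(16);
    /* Defect 1: the malloc() return value is never checked, so the
     * fgets() call below can dereference a NULL pointer. */
    if (fgets(id, 16, fp) == NULL) {
        /* Defect 2: this early return leaks the buffer allocated above. */
        return NULL;
    }
    id[strcspn(id, "\n")] = '\0';   /* strip the trailing newline */
    return id;
}
```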

UPDATE (2/9/10): Here's another good code analysis article:

A Formal Methods-based verification approach to medical device software analysis

Medical Device Software on Shared Computers

Monday, September 7th, 2009

The issues raised in Tim's post Running Medical Device Software on Shared Computers open Pandora's box. Installation of medical device software on general purpose computers is an intractable problem.

It's very similar to the complications associated with Networked Medical Devices, except worse.  An FDA approved device in a changing network environment is one thing.  Software that controls a medical device on a PC that is open for the user to install operating system upgrades, applications, and other device drivers is a recipe for disaster.

I don't care how obsessed a vendor is; there is no way for a medical device manufacturer to verify proper operation for all possible hardware and software environments.

With today's PC architectures, the highest-risk area is at the device driver level. Running multiple devices that require even modest I/O bandwidth can cause interference that results in lost or significantly delayed data. This is especially true with Windows XP or Vista, which do not inherently provide any real-time data processing capabilities.
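As a rough illustration (my own sketch, with assumed sample rates and thresholds), about the best an application on a general purpose OS can do is detect the problem after it has already happened:

```c
#include <stdio.h>
#include <stdint.h>

#define SAMPLE_PERIOD_MS 4   /* e.g. a 250 Hz waveform -- assumed value */
#define MAX_JITTER_MS    2   /* tolerated scheduling jitter -- assumed  */

/* Hypothetical timing check: flag samples that arrive late because some
 * other driver or application starved the acquisition thread.  On a
 * non-real-time OS this can only be detected, not prevented. */
int check_sample_timing(uint64_t prev_ms, uint64_t now_ms)
{
    uint64_t delta = now_ms - prev_ms;
    if (delta > SAMPLE_PERIOD_MS + MAX_JITTER_MS) {
        fprintf(stderr, "WARNING: %llu ms gap -- possible lost data\n",
                (unsigned long long)delta);
        return -1;   /* the caller must decide how to handle the gap */
    }
    return 0;
}
```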

I think the best strategy is to provide stand-alone medical devices that have no dependencies on the PC hardware and software that may be available for down-stream data processing and display. This not only reduces compatibility risk, but it can also address mobility issues. With miniaturization and wireless capabilities, the medical device can now travel with the patient.

Also, with Pandora's box safely closed, solving the networked medical device issues suddenly feels manageable.

UPDATE (9/15/09): Here's an interesting take on this subject from the consumer perspective: Should Medical Devices Multitask?

When Cell Phones Become Medical Devices

Tuesday, July 14th, 2009

Mobile devices are quickly becoming the conduit of choice for collecting and disseminating clinical data. The FDA will soon be forced to step in and take regulatory control. It’s going to happen eventually.

Bradley Merrill Thompson does a good job of outlining the factors that lead to FDA oversight in the article FDA may regulate certain mobile phones, accessories. The Components vs. Accessories distinction is an important one for manufacturers — regulatory oversight is dependent on who buys it. The “intended use”, labeling, and marketing are also factors.

Because of its unique user interface, display, and broadband capabilities, the Apple iPhone is a particularly attractive platform for medical applications. For example, the AirStrip OB application is available for download at the Apple App Store and is FDA cleared. Other modalities, like the Critical Care monitor application, are still in testing.

The “intended use” issues are complex.  A cell phone that is used to communicate clinical information, e.g. to a PHR, essentially becomes part of a Networked Medical Device.

This means that 510(k) premarket notification may also be necessary under the proposed Medical Device Data System (MDDS) rules. If you read through what constitutes a MDDS, you can see how well the definitions fit mobile device functionality:

  • The electronic transfer or exchange of medical device data from a medical device.
  • The electronic storage and retrieval of medical device data.
  • The electronic display of medical device data.
  • The electronic conversion of medical device data from one format to another format.

It's not the end of the world to be classified as a medical device, but the verification and validation of these functions is not a trivial endeavor (see here).
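To see why, take just the "conversion" item in the list above. Even a trivial export like this hypothetical C sketch (the record fields and units are mine, purely for illustration) has to be verified for unit conversion, rounding, field order, and truncation -- and that's before storage, display, and transfer are considered:

```c
#include <stdio.h>

/* Hypothetical device record -- the fields are illustrative only. */
struct vitals {
    int    patient_id;
    double spo2_pct;   /* pulse oximetry, percent */
    double temp_c;     /* temperature, Celsius    */
};

/* "Electronic conversion of medical device data from one format to
 * another": even this one-line CSV export carries verification burden
 * for unit conversion, rounding, field ordering, and truncation. */
int export_csv(const struct vitals *v, char *buf, size_t len)
{
    double temp_f = v->temp_c * 9.0 / 5.0 + 32.0;    /* C -> F */
    int n = snprintf(buf, len, "%d,%.1f,%.1f\n",
                     v->patient_id, v->spo2_pct, temp_f);
    return (n > 0 && (size_t)n < len) ? 0 : -1;      /* detect truncation */
}
```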

The FDA is almost certainly looking and will be taking action soon.

UPDATE (7/25/09): Here's a mobile device that does not appear to have FDA approval: EKG On Your Mobile Wherever You Are

UPDATE (11/24/09): When will the FDA drop the gavel?

UPDATE (4/19/11): Navigating Regulatory Uncertainty for Smartphones

 

Software Verification vs. Validation

Thursday, March 26th, 2009

For some reason it just really bugs me that these two terms are incorrectly interchanged so frequently.

Part of the problem is that the document General Principles of Software Validation; Final Guidance for Industry and FDA Staff (2002) does not do a good job of differentiating actual verification and validation activities. They just call everything validation.

The recent MD&DI article Building Quality into Medical Device Software provides a pretty good overview of these regulatory requirements, but is another case in point. The article talks about "software validation" at every step, just like the FDA document.

Another similar article on this subject is Software Validation: Turning Concepts into Business Benefits. It is also confused, e.g. (my highlight):

... software validation involves the execution of tests designed to cover each of the specific system requirements.

No, testing specific requirements is a verification activity! It's no wonder most people are confused.

These definitions, Difference between Verification and Validation, are better as they highlight the sequencing of activities:

Verification takes place before validation, and not vice versa. Verification evaluates documents, plans, code, requirements, and specifications. Validation, on the other hand, evaluates the product itself.

From here (warning PDF):
[Figure: verification vs. validation]

Validation activities (usability testing, user feedback, etc.) are much harder to define, execute, and document properly than most verification testing.

Here are the golden rules:

Verification: was the product built right?

Validation: was the right product built?
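Here's a hedged example of what that difference looks like in practice (my own sketch, with a made-up requirement):

```c
#include <assert.h>
#include <math.h>

/* Hypothetical requirement REQ-042: heart rate shall be computed as
 * 60000 / RR-interval(ms), rounded to the nearest integer. */
static int heart_rate_bpm(double rr_interval_ms)
{
    return (int)floor(60000.0 / rr_interval_ms + 0.5);
}

int main(void)
{
    /* VERIFICATION: objective evidence that the output meets the written
     * requirement -- "was the product built right?" */
    assert(heart_rate_bpm(1000.0) == 60);
    assert(heart_rate_bpm(800.0)  == 75);

    /* VALIDATION can't be reduced to an assert.  It asks whether the
     * displayed rate actually serves the user's needs and the intended
     * use -- "was the right product built?" -- which is why it takes
     * usability testing, clinical evaluation, and user feedback. */
    return 0;
}
```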

I guess I should get over it...

UPDATE (5/12/09):  Good definitions from here: Diagnosing Medical Device Software Defects Using Static Analysis:

Verification and validation are terms that are often used in software. However, it is important to understand the difference between these two distinct but complementary activities. Software verification provides objective evidence that the design outputs of a particular phase of the software development life cycle meet all of the specified requirements for that phase by checking for consistency, completeness, and correctness of the software and its supporting documentation. Validation, on the other hand, is the confirmation by examination and provision of objective evidence that software specifications conform to user needs and intended uses, and that the particular requirements implemented through software can be consistently fulfilled.

UPDATE (8/6/09): The importance of proper V&V cannot be overstated. The FDA is watching: FDA still enforcing regulations for validation of enterprise software.

UPDATE (2/11/10): I just noticed that the guidance document link on the FDA site had changed, so I fixed it. When I reviewed the document I found that even though it was "issued" in Jan. 2002, it had been recently updated (11/6/09). The later sections (4, 5, and 6) still use the term validation generically, but the updated document does distinguish between verification and validation:

3.1.2 Verification and Validation

The Quality System regulation is harmonized with ISO 8402:1994, which treats "verification" and "validation" as separate and distinct terms. On the other hand, many software engineering journal articles and textbooks use the terms "verification" and "validation" interchangeably, or in some cases refer to software "verification, validation, and testing (VV&T)" as if it is a single concept, with no distinction among the three terms.

Software verification provides objective evidence that the design outputs of a particular phase of the software development life cycle meet all of the specified requirements for that phase. Software verification looks for consistency, completeness, and correctness of the software and its supporting documentation, as it is being developed, and provides support for a subsequent conclusion that software is validated. Software testing is one of many verification activities intended to confirm that software development output meets its input requirements. Other verification activities include various static and dynamic analyses, code and document inspections, walkthroughs, and other techniques.

Software validation is a part of the design validation for a finished device, but is not separately defined in the Quality System regulation. For purposes of this guidance, FDA considers software validation to be "confirmation by examination and provision of objective evidence that software specifications conform to user needs and intended uses, and that the particular requirements implemented through software can be consistently fulfilled." In practice, software validation activities may occur both during, as well as at the end of the software development life cycle to ensure that all requirements have been fulfilled.

More Software Forensics and Why Analogies Suck

Tuesday, July 1st, 2008

There's a recent article in the Baltimore Sun called Flaws in medical coding can kill, which just rehashes static software analysis (hat tip: FDA Trying to Crack Down on Software Errors).

I've discussed software forensics tools before. Yes, bad software has hurt and killed people, and there's no excuse for it. I still don't think an expensive automated software tool is the silver bullet (as the article implies) for solving these problems.

But here's what really bugs me:

"If architects worked this way, they'd only be able to find flaws by building a building and then watching it fall down"

This is a prime example of why analogies suck.  The quote is supposed to somehow bolster the FDA's adoption of "new forensic technology". If you stop and think about it, it does just the opposite.

I guess you first have to consider the source -- a VP of Engineering for a forensic software vendor. This is exactly what you'd expect to hear in a sales pitch.

What's truly ironic, though, is that a static analysis tool can only be used on source code! Think about it. Source code is the finished product of the software design and development process. Also, forensic science, by definition, is the examination of something that has already happened. It can only be done after the fact.

The logical conclusion you would draw from the analogy is that static analysis is probably useless because the building is already up!  If you step back and look at the full software quality process, this may well be true.

I'm not saying that static analysis tools don't have value. Like all of the other software tools we use, they have their place.

Just beware when you try to use an analogy to make a point.

UPDATE (7/5/08):

Here's another take on medical device bugs: When bugs really do matter: 22 years after the Therac 25.

UPDATE (7/16/08):
From Be Prepared: Software Forensics Gaining Steam at FDA, David Vogel of Intertech Engineering Associates says:

... that static tools are hyped to do more than they can actually deliver. “Static analysis looks for simple coding errors and does not apply heuristics to understand how it will perform dynamically because it is a static analysis tool”

I agree.
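Here's a hedged C sketch of the distinction Vogel is drawing (my example, not his). A static analyzer is perfectly happy with this code; the questions that actually matter are dynamic and clinical:

```c
/* Hypothetical alarm pre-filter: structurally clean C that a static
 * analysis tool will pass -- no leaks, no NULL dereferences, no
 * out-of-bounds access. */
double moving_average(const double *samples, int n)
{
    double sum = 0.0;
    for (int i = 0; i < n; i++)
        sum += samples[i];
    return (n > 0) ? sum / n : 0.0;
}

/* What static analysis cannot tell you: whether this filter, called from
 * the monitoring loop with the window size actually configured in the
 * field, still meets the device's alarm-latency requirement, or whether
 * the smoothing masks a clinically significant event.  Those answers
 * come from testing the running system, not from scanning the source. */
```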

UPDATE (7/26/08):

Another reference: Are hospitals really safe?

UPDATE (9/16/08):

A couple more related articles:

Applying Static Analysis To Medical Device Software

Using static analysis to diagnose & prevent failures in safety-critical device designs

UPDATE (9/27/08):

Architecting Buildings and Software: Software architects are an important component in the creation of quality software and need to continue to refine and improve their role in the development process.  No matter how you try to bend and twist it though, the building analogy will always be problematic -- so why bother? Maybe that "intuitive understanding" of the construction industry just distracts us from being innovative about what's required to build software.

UPDATE (12/1/08): If Jeff wasn't a programmer he'd be a farmer: Tending Your Software Garden

Connecting Computers to FDA Regulated Medical Devices

Wednesday, June 18th, 2008

Pete Gordon asked a couple of questions regarding FDA regulations for Internet-based reporting software that interfaces with medical devices. The questions are essentially:

  1. How much documentation (SRS, SDS, Test Plan) is required and at what stage can you provide the documentation?
  2. How does the FDA view SaaS architectures?

The type of software you're talking about has no real FDA regulatory oversight. The FDA has recently proposed new rules for connectivity software. I've commented on the MDDS rules, but Tim has a complete overview here: FDA Issues New MDDS Rule. As Tim notes, if the FDA puts the MDDS rules into place and becomes more aggressive about regulation, many software vendors that provide medical device interfaces will be required to submit 510(k) premarket notifications.

Dealing with the safety and effectiveness of medical devices in complex networked environments is on the horizon. IEC 80001 (and here) is a proposed process for applying risk management to enterprise networks incorporating medical devices.  My mantra: High quality software and well tested systems will always be the best way to mitigate risk.

Until something changes, the answer to question #1 is that if your software is not a medical device, you don't even need to deal with the FDA. The answer to question #2 is the same. The FDA doesn't know anything about SaaS architectures unless it's submitted as part of a medical device 510(k).

I thought I'd take a more detailed look at the architecture we're talking about so we can explore some of the issues that need to be addressed when implementing this type of functionality.

[Figure: simplified medical device data system architecture]

This is a simplified view of the way medical devices typically interface to the outside world. The Communications Server transmits and receives data from one or more medical devices via a proprietary protocol over whatever media the device supports (e.g. TCP/IP, USB, RS-232, etc.).

In addition to having local storage for test data, the server could pass data directly to an EMR system via HL7 or provide reporting services via HTTP to a Web client.

There are many other useful functions that external software systems can provide. By definition though, a MDDS does not do any real-time patient monitoring or alarm generation.

Now let's look at what needs to be controlled and verified under these circumstances.

  1. Communications interaction with proper medical device operation.
  2. Device communications protocol and security.
  3. Server database storage and retrieval.
  4. Server security and user authentication.
  5. Client/server protocol and security.
  6. Client data transformation and presentation to the user (including printed reports).
  7. Data export to other formats (XML, CSV, etc.).
  8. Client HIPAA requirements.

Not only is the list long, but these systems involve the combination of custom written software (in multiple languages), multiple operating systems, configurable off-the-shelf software applications, and integrated commercial and open source libraries and frameworks. Also, all testing tools (hardware and software) must be fully validated.

One of the more daunting verification tasks is identifying all of the possible paths that data can take as it flows from one system to the next. Once identified, each path must be tested for data accuracy and integrity as it's reformatted for different purposes, communications reliability, and security. Even a modest one-way store-and-forward system can end up with a hundred or more unique data paths.
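A back-of-the-envelope C sketch makes the point (the counts are made up -- three device protocols, three transports, six output formats -- your system will differ):

```c
#include <stdio.h>

int main(void)
{
    /* Hypothetical counts for a system like the one diagrammed above. */
    const char *devices[]    = { "ECG", "SpO2", "NIBP" };
    const char *transports[] = { "TCP/IP", "USB", "RS-232" };
    const char *outputs[]    = { "local DB", "HL7 to EMR", "HTTP report",
                                 "CSV export", "XML export", "printed report" };

    int count = 0;
    for (size_t d = 0; d < sizeof devices / sizeof *devices; d++)
        for (size_t t = 0; t < sizeof transports / sizeof *transports; t++)
            for (size_t o = 0; o < sizeof outputs / sizeof *outputs; o++)
                count++;   /* each combination is a unique path to verify */

    printf("unique data paths: %d\n", count);   /* 3 * 3 * 6 = 54 */
    return 0;
}
```

Even this toy configuration yields 54 paths before any error, retry, or security branches are added, which is how a modest store-and-forward system climbs past a hundred.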

A full set of requirements, specifications, and verification and validation test plans and procedures would need to be in place and fully executed for all of this functionality in order to satisfy the FDA Class II GMP requirements. This means that all of the software and systems must be complete and under revision control. There is no "implementation independent" scenario that will meet the GMP requirements.

It's no wonder that most MDDS vendors (like EMR companies) don't want to have to deal with this. Even for companies that already have good software quality practices in place, raising the bar up to meet FDA quality compliance standards would still be a significant organizational commitment and investment.

The Benefits of Software Validation

Monday, March 24th, 2008

Many people still confuse verification (was the product built right?) and validation (was the right product built?). The benefits of both of these activities are well covered in Software Validation: Turning Concepts into Business Benefits:

[Figure: potential benefits of software validation and verification]

Software V&V is an FDA requirement, but the same methodologies can be used to improve non-device software as well.

Corrective and Preventive Action (CAPA) and the FDA

Tuesday, March 11th, 2008

It’s Easy to Get into Trouble Because of CAPA summarizes an audio conference on Implementing an Effective CAPA System: What You Need to Know (PDF) sponsored by FOI Services.

Creating a corporate culture that can carry out a quality system plan is a big challenge, especially for new medical device companies. As this figure aptly shows (hacked from page 42), aligning the organization with a formal project Mission Statement, Scope, Goals, and Communication Plan is a key component for implementing an effective CAPA system.

[Figure: Developing an Effective Nonconformance Control and CAPA System]

The survey statistics on the use of risk management tools and CAPA effectiveness are interesting. Of particular interest, of course, is the number of firms cited by the FDA for CAPA deficiencies. The slides detailing CAPA implementation and advice are also quite informative.

I stared at the "Big C" (page 23) for a while. I know it's trying to show the relationship between CAPA (820.100, "little c"?) and the rest of the quality system. I thought I figured it out a couple of times, but in the end I was not able to decipher its true meaning. I could buy the audio CD to find out or maybe Jan can just clarify it for us.

Medical Device Data System (MDDS) FDA Rule Changes

Friday, February 8th, 2008

As reported here and here, the FDA is proposing to reclassify a MDDS from a Class III to a Class I medical device. On the surface this might seem like a big deal. If you read the fine print though (my highlight):

FDA has already recognized that the class III requirements are not necessary for ensuring the safety and effectiveness of MDDS devices and has been exercising enforcement discretion with MDDS device manufacturers. These firms have not been required to submit PMAs or meet other requirements typically required of manufacturers of class III devices, but the agency believes that all or nearly all firms in this industry have in place good business practices, including quality systems.

They're just formalizing already established practices.