Archive for the ‘Software Quality’ Category

OTS/SOUP Software Validation Strategies

Thursday, February 16th, 2012

My last discussion of Off-The-Shelf software validation only considered the high-level regulatory requirements.  What I want to do now is dig deeper into the strategies for answering question #5:

How do you know it works?

This is the tough one. The other questions are important, but relative to #5, answering them is pretty easy.  How to answer this question (i.e. accomplish this validation) is the source of a lot of confusion.

There are many business and technical considerations that go into the decision to use OTS or SOUP software as part of a medical device. Articles and books are available that provide guidance and general OTS validation approaches. For example, Off-the-Shelf Software: A Broader Picture (warning: PDF) is very informative in this regard:

  • Define business’ use of the system, ideally including use cases and explicit clarification of in-scope and out-of-scope functionality
  • Determine validation deliverables set based on system type, system risk, project scope, and degree of system modification
  • Review existing vendor system and validation documentation
  • Devise strategy for validation that leverages vendor documentation/systems as applicable
  • Create applicable system requirements specification and design documentation
  • Generate requirements-traceable validation protocol and execute validation
  • Put in place system use, administration, and maintenance procedures to ensure the system is used as intended and remains in a validated state

This is great stuff, but unfortunately it does not help you answer question #5 for a particular type of software. That's what I want to try to do here.

OTS really implies Commercial off-the-shelf (COTS) software. The "commercial" component is important because it presumes that the software in question is a purchased product (typically in a "shrink-wrapped" package) that is designed, developed, and supported by a real company.  You can presumably find out what design controls and quality systems are in place for the production of their software and incorporate these findings into your own OTS validation.  If not, then the product is essentially SOUP (keep reading).

Contrast OTS with Software of Unknown Provenance (SOUP).  It is very unlikely that you can determine how this software was developed, so it's up to you to validate that it does what it's supposed to do.  In some instances this may be legacy custom software, but these days it probably means the integration of an open source program or library into your product.

The following list is by no means complete. It is only meant to provide some typical software categories and the strategies used for validating them.  Some notes:

  • I've included a Hazard Analysis section in each category because the amount of validation necessary is dependent on the level of concern.
  • The example requirements are not comprehensive. I just wanted to give you a flavor for what is expected.
  • Always remember, requirements must be testable.  The test protocol has to include pass/fail criteria for each requirement. This is QA 101, but is often forgotten.
  • I have not included any example test protocol steps or reports.  If you're going to continue reading, you probably don't need help in that area.

Operating Systems

Examples:

  • Windows XP SP3
  • Windows 7 32-bit and 64-bit
  • Red Hat Linux

Approach:

  1. Hazard Analysis: Do a full assessment of the risks associated with each OS.
    • Pay particular attention to the hazards associated with device and device driver interactions.
    • List all hazard mitigations.
    • Provide a residual Level of Concern (LOC) assessment after mitigation -- hopefully this will be negligible.
    • If the residual LOC is major, then Special Documentation can still be provided to justify its use.
  2. Use your full product verification as proof that the OS meets the OTS requirements. This approach is valid because your product will probably use only a small subset of the full capabilities of the OS.  All of the other functionality that the OS provides would be out of scope for your product.
  3. This means that a complete re-validation of your product is required for any OS updates.
  4. There is no test protocol or report with this approach. The OS is considered validated when the product verification has been successfully completed.

Compilers

Examples:

  • Visual Studio 2010 (C# or C++)

Approach:
  1. Hazard Analysis:
    • For the vast majority of cases, I think it is safe to say that a compiler does not directly affect the functioning of the software or the integrity of the data.  What a program does (or doesn't do) depends on the source code, not on the compiled version of that code.
    • The compiler is also not responsible for faults that may occur in devices it controls. The application just needs to be written so that it handles these conditions properly.
    • For some embedded applications that use specialized hardware and an associated compiler, the above will not necessarily be true. All functionality of the compiler must be validated in these cases.
  2. For widely used compilers (like Microsoft products) full product verification can be used as proof of the OTS requirements.
  3. Validation of a new compiler version, e.g. upgrading from VS 2008 to VS 2010: showing that the same code base compiles and all unit tests pass in both versions can be used as proof. This of course assumes that the old version was previously validated.
  4. The compiler is considered fit for use after the product verification has passed, so there is also no test protocol or report in this case.

Integrated Libraries

Examples:

Approach:
  1. Hazard Analysis: These open source libraries are integrated directly into the product software.  The impact on product functioning, in particular data integrity, must be fully assessed.
  2. You first list the requirements that you will be using. For example, typical logging functionality that might include:
    • The logging system shall be able to post an entry labeled as INFO in a text file.
    • The logging system shall be able to post an entry labeled as INFO in a LEVEL column of a SQL Server database.
    • ... same for ERROR, DEBUG, WARN, etc.
    • The logging system shall include time/date and other process information formatted as "YYYY-MM-DD HH:MM:SS..." for each log entry.
    • The logging system shall be able to log exceptions at all log levels, and include full stack traces.
  3. For database functionality, listing basic CRUD requirements plus other specialized needs can be done in the same way.
  4. I have found that the easiest way to test these kinds of requirements is to simply write unit tests that prove the library performs the desired functionality.  The unit tests are essentially the protocol, and a report showing that all asserts have passed is a great artifact.
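
For illustration, here is a minimal sketch of what one of these unit tests might look like, assuming log4net with a FileAppender configured in App.config and NUnit as the test runner (the logger name and log file path are assumptions for this example):

    using System.IO;
    using log4net;
    using log4net.Config;
    using NUnit.Framework;

    [TestFixture]
    public class LoggingRequirementTests
    {
        [Test]
        public void PostsInfoEntryToTextFile()    // maps to the first logging requirement above
        {
            XmlConfigurator.Configure();           // appender and file path come from App.config
            ILog log = LogManager.GetLogger("ValidationLogger");

            log.Info("validation entry");          // exercise the requirement

            // Pass/fail criteria: the entry appears in the log file, labeled INFO.
            string text = File.ReadAllText(@"logs\app.log");
            Assert.IsTrue(text.Contains("INFO"));
            Assert.IsTrue(text.Contains("validation entry"));
        }
    }

A passing run of the whole suite, captured in the test runner's report, then serves as the validation record.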

Version Control Systems

Examples:

Approach:
  1. Hazard Analysis: These are configuration management tools and are not part of the product. As such, the level of concern is generally low.
  2. As above, you first list the specific functionality that you expect the VCS to perform. Here are some examples of the types of requirements that need to be tested:
    • The VCS shall be able to add a directory to a repository.
    • The VCS shall be able to add a file to a repository.
    • The VCS shall be able to update a file in a repository.
    • The VCS shall be able to retrieve the latest revision of files and directories.
    • The VCS shall be able to branch a revision of files and directories.
    • The VCS shall be able to merge branched files and directories.
  3. You then write a protocol that tests each one. This would include detailed instructions on how to perform these operations along with the pass/fail criteria for each requirement.

Issue Tracking Tools

Examples:

Approach:
  1. Hazard Analysis: These tools are used for the management of the development project. Again, the level of concern is generally low.
  2. You only need to validate the functionality you intend to use.  The features that you don't use do not need to be tested.
  3. You simply need to test the specific functionality.  Some example requirements -- the roles, naming conventions, and workflow will of course depend on your organization and the tool being used:
    • A User shall be able to create a new issue.
    • A User shall be able to comment on an issue.
    • A Project Manager shall be able to assign an issue to a Developer.
    • A Developer shall be able to change the state of an issue to 'ready for test'.
    • A Tester shall be able to change the state of an issue to 'verified'.
    • The tool shall be able to send e-mail notifications when an issue has been modified.
    • An Administrator shall be able to define a milestone.
  4. A protocol with detailed instructions and pass/fail criteria is executed and reported on.

Validation is a lot of work but is necessary to ensure that all of the tools and components used in the development of medical device software meet their intended functionality.

Building Safety into Medical Device Software

Saturday, January 7th, 2012

The article Build and Validate Safety in Medical Device Software takes a critical look at the current processes for medical device software and concludes:

The complexity of the software employed in many medical devices has rendered inadequate traditional methods (testing) for demonstrating their safety.

The article then provides examples of the types of analyses that can be performed to better ensure safety.

Interesting read.

Here are some references:

BohrBug: Not necessarily easy to find, but once discovered is reproducible.

Heisenbug: The ever-annoying bug that cannot be reliably reproduced.

Spin: An open-source software tool for formal verification of distributed software systems.

Discomfort with Computerized Medical Devices

Monday, April 11th, 2011

Here are some thoughts regarding the article: I feel a little uncomfortable about computerized medical devices, and here's why.

  • Just about all medical devices are computerized these days. Most will not harm or kill you if their software fails (Class I & II), but that's no excuse for writing crappy code.
  • As pointed out, the mission-critical nature of mass transit systems (airplanes, subways, etc.) affords those industries a much higher level of scrutiny than cars or medical devices ever will. But that's still no excuse for writing crappy code.

Even though drugs and airplanes need advance approval from authorities before being brought to the market, medical devices and software do not, at least in the United States.

  • This statement is not correct. All medical devices, including diagnostic and therapeutic software-only products, require FDA clearance to be sold in the US market (see the Class I & II link above).  There are many exemptions, but a 510(k) premarket notification is generally a minimum requirement. After you receive clearance, the FDA can pull your device off the market (the dreaded “recall”) at any time due to complaints or unsatisfactory audit results.
  • The FDA QSR Subpart C (§ 820.30) looks a lot like DO-178B as quality system design controls go, but I'm sure the aviation standard enforcement is far more rigorous (well, at least I hope it is). It's true, there are no coding standards for medical device software.  Good companies set their own development standards and practices -- some even use static analysis! It's all the other companies that don't bother to do anything that you have to worry about.

I'm certain that static analysis technology has improved vastly in the four years since some of the articles below were written.  The challenge is that the complexity of medical device software and the systems they run on has also increased tremendously during that time. In particular, the explosion of high bandwidth wireless networks along with advances in handheld computing power and graphics capability (think iPhone/iPad, of course) is fundamentally changing the way medical devices will be developed and delivered to the market in the future.

Static analysis will remain a valuable tool for identifying critical software defects, but new methods will have to be developed for rooting out risks in the new network-connected, multi-touch world.

It's sad to say, but you should probably be more than "a little uncomfortable."

Other static analysis articles:

A Few Billion Lines of Code Later: Using Static Analysis to Find Bugs in the Real World
More Software Forensics and Why Analogies Suck
Medical Device Software Forensics
Pascal's 3-part static analysis series that starts here:
Guest Article: Static Analysis in Medical Device Software (Part 1) — The Traps of C

Agile Software Development in Regulated Environments

Saturday, October 16th, 2010

The article Agile Software Development in Regulated Environments Example: Medical Devices is part of a series on High Assurance Agile Development in Regulated Environments. The purpose of this article and future posts is to introduce the FDA regulatory landscape and then

... see what we can do to “agilify” our practices under these standards as we move forward.

It's been three years since I wrote Agile development in a FDA regulated setting.  I'll be interested to see if the application of "agile, high assurance activities" in this environment -- and the associated issues -- have changed since then.

UPDATE (10/23/10): Can and should agile be used for medical device development? Absolutely!

UPDATE (11/27/10): More discussion here: Can Agile Software Methods be used in medical device software development?

UPDATE (11/28/10): Agile Medical Device Software Development?

UPDATE (12/17/10): GE Healthcare Goes Agile

UPDATE (1/5/11): Missed this one: Four Reasons Medical Device Companies Need Agile Development

Technical Debt in Medical Software

Wednesday, August 4th, 2010

Software development is software development. Most of the life cycle and quality issues faced in medical software are the same challenges for any software product. Technical Debt in Medical Software points out what technical debt is:

  • Complexity
  • Code Duplication
  • Documentation Debt
  • Testing Debt
  • Architectural Debt

A Martin Fowler article is referenced that nicely identifies the source of technical debt.

The benefits of paying down the debt are:

  • Increased R&D efficiency and improved time to market
  • Hitting commitment dates
  • Performance and technology upgrades

Of course, if you don't want to pay it off, there's always the option of going bankrupt. This may have long-term advantages, but it will surely be a more expensive route. There is one statement in this regard that I think needs some qualification:

In this case the technical debt can be retired along with the legacy system, and like filing Chapter 11, you are no longer responsible to address all the sins of the past.

I know this refers to code sins, but just because you decide to do a re-write doesn't mean you no longer have responsibility for the legacy product. You still have customers using the old software that you're obligated to continue to support.  For FDA-approved medical software, this is a legal requirement. Most of the time this means that the legacy code will need to be maintained and periodically updated in the field, sometimes even after the "new" product is released. This just makes the cost of bankruptcy even higher.

ISO 62304: The Harmonized Standard for Medical Device Software Development

Saturday, June 5th, 2010

The FDA added ISO 62304 to its list of recognized consensus standards for software development in 2009. Developing Medical Device Software to ISO 62304 gives a nice overview.

Besides providing a globally accepted development process, one of the standard's other practical components is the assignment of a safety class to individual software items and units:

  • Class A: No injury or damage to health is possible
  • Class B: Non-serious injury is possible
  • Class C: Death or serious injury is possible

Each classification changes the required documentation for the assigned software.

These standards will become more widely known as the FDA moves to regulate the proliferation of medical applications for personal and home use, most notably software that runs on mobile devices. I've discussed this before in When Cell Phones Become Medical Devices. As noted more recently in FDA oversight may extend throughout health IT:

... an FDA director stated flatly: "Under the Federal Food, Drug and Cosmetic Act, HIT software is a medical device."

Broad FDA oversight at the QSR/62304 level will probably not happen, but change is certainly coming for many HIT companies.

The Elsmar Cove Forum IEC 62304 - Medical Device Software Life Cycle Processes has a lot of discussion on this topic. This is where I found a document checklist that is useful for understanding the process scope:

IEC62304_Checklist.xls (Excel spreadsheet)

UPDATE (9/9/10): IEC 62304 – The Basics

The Software Quality Balancing Act

Saturday, May 15th, 2010

Andrew Dallas's article Caution: V&V May Be Hazardous to Software Quality touches on a number of good points regarding software quality best practices.

Medical device software development V&V (also see here) and the documentation that goes with it have substantial costs. Any strategy that can reduce this overhead and still meet the necessary quality standards should be seriously considered.

The use of "incremental" software development approaches really refers to Agile methodologies.  I've talked about the use of Agile for medical device software development several times.

Most of the discussion revolves around the risks associated with this approach. The benefits of any process change have to be weighed against the possible risks that might be introduced.

Besides the importance of understanding what V&V documentation the FDA actually wants to see, Andrew makes a great point about producing quality software versus the V&V process (my highlight):

V&V is not software testing. Verification testing ensures specified requirements have been fulfilled. Validation testing ensures that particular requirements for a specific intended use can be consistently fulfilled.

Following the required FDA V&V processes alone is not sufficient to ensure software quality. You also have to adhere to software development best practices at all levels. For example, in addition to non-functional requirements there are many software quality factors that require careful design considerations and testing that you may decide are outside the scope of FDA reporting.  Deciding what to report and what to leave out is the balancing act.

To Validate and Verify: Software Issues Solved

Tuesday, April 6th, 2010

Yours truly was interviewed for this article:

To Validate and Verify: Software Issues Solved

"V&V" is one of those topics that should be simple to understand, but for some reason is the source of a lot of confusion. This is evident in the comments on Software Verification vs. Validation.

It is also interesting to note that the differing interpretations of these definitions result in a wide variety of V&V strategies and plans. From a regulatory point of view there is no single right or wrong way to do it. It's similar to the implementation of quality systems in general: if you say you are going to do something, you need to be able to prove that you're actually doing it.

Guest Article: Static Analysis in Medical Device Software (Part 3) — Formal specifications

Sunday, March 7th, 2010

Pascal Cuoq at Frama-C continues his discussion of static analysis for medical device software. Also see Part 1 and Part 2.

In May 2009, I alluded to a three-part blog post on the general topic of static analysis in medical device software. The ideas I hope will emerge from this third and last part are:

  1. Formal specifications are good,
  2. Partial formal specifications are underrated, and
  3. One should never commit in advance to writing anything, however easy it seems it will be at the time.

Going back to point one, a "functional specification" is a description of what a system/subsystem does. I really mostly intend to talk about formal versions of functional specifications. This only means that the description is written in a machine-parsable language with an unambiguous grammar. The separation between formal and informal specifications is not always clear-cut, but this third blog entry will try to convince you of the advantages of specifications that can be handled mechanically.

Near the bottom of the V development cycle, "subsystem" often means software: a function, or a small tree of functions. A functional specification is a description of what a function (respectively, a tree of functions) does and does not do (the time they take to execute, for instance, is usually not considered part of the functional specification, although whether they terminate at all can belong in it; it is only a matter of convention). The Wikipedia page on "Design by Contract" lists the following as making up a function contract, and while the term is loaded (it may evoke Eiffel or run-time assertion checking, which are not specifically the topic here), the three bullet points below are a good categorization of what functional specifications are about:

  • What does the function expect, what rules should the caller obey at the time of calling it?
  • What does the function guarantee, what is the caller allowed to expect from the function's results?
  • What properties does the function maintain?

I am tempted to point out that invariants maintained by a function can be encoded in terms of things that the function expects and things that the function guarantees, but this is exactly the kind of hair-splitting that I am resolved to give up on.

The English sentence "when called, this function may modify the global variables G and H, and no other" is almost unambiguous and rigorous — assuming that we leave aliasing out of the picture for the moment. Note that while this is technically something that the function ensures on return (it ensures that for any variable other than G or H, the value of the variable after the call is the same as its value before the call), the property can be thought of more intuitively as something that the function maintains.

The enthusiastic specifier may like the sentence "this function may modify the global variables G and H, and no other" so much that he may start copy-pasting the boilerplate part from one function to another. Why should he take the risk of accidentally introducing an ambiguity? Re-writing from memory may lead him to forget the "may" auxiliary when he does not intend to guarantee that the function will overwrite G and H each time it is called. As with contracts of a more legal nature, copy-pasting is the way to go. The boilerplate may also include jargon that makes it impossible to understand for someone who is not from the field, or even from the company, whence the specifications originate. Ordinary words may be used with a precise domain-specific meaning. All good reasons not to paraphrase, and to reuse the specification template verbatim.

The hypothetical specifier may particularly appreciate that the specification above is not only carefully worded but also that a list of possibly modified globals is part of any wholesome function specification. He may — rightly, in my humble opinion — endeavor to use it for all the functions he has to specify near the end of the descending branch of the V cycle. This is when he is ripe for the introduction of a formal syntax for functional specifications. According to Wikipedia, Robert Recorde introduced the equal sign "to auoide the tediouſe repetition of [...] woordes", and the sentence above is a tedious repetition begging for a sign of its own to replace it. When the constructs of the formal language are well-chosen, the readability is improved, instead of being diminished.

A natural idea for expressing the properties that make up a function contract is to use the same language as for the implementation. Being a programming language, it is suitably formal; the specifier, even if he is not the programmer, is presumably already familiar with it; and the compiler can transform these properties into executable code that checks that preconditions are properly assured by callers, and that the function does its own part by returning results that satisfy its postconditions. This choice can be recognized in run-time assertion checking, and in Test-Driven Development (in Test-Driven Development, unit tests and expected results are written before the function's implementation and are considered part of the specification).
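
For instance, in this hypothetical C# sketch (the function and its test are invented for illustration), the postcondition is checked as a run-time assertion, and the NUnit test, written before the implementation, doubles as the specification:

    using System;
    using System.Diagnostics;
    using NUnit.Framework;

    public static class Clamp
    {
        public static int ToRange(int value, int lo, int hi)
        {
            if (lo > hi)                                   // precondition the caller must obey
                throw new ArgumentException("lo must not exceed hi");
            int result = Math.Min(Math.Max(value, lo), hi);
            Debug.Assert(result >= lo && result <= hi);    // postcondition the function guarantees
            return result;
        }
    }

    [TestFixture]
    public class ClampSpecification                        // written before ToRange, TDD-style
    {
        [Test]
        public void ResultStaysWithinBounds()
        {
            Assert.AreEqual(5, Clamp.ToRange(12, 0, 5));
            Assert.AreEqual(0, Clamp.ToRange(-3, 0, 5));
            Assert.AreEqual(4, Clamp.ToRange(4, 0, 5));
        }
    }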

Still, the choice of the programming language as the specification language has the disadvantages of its advantages: it is a programming language; its constructs are optimized for translation to executable code, with the intent of describing algorithms. For instance, the "no global variable other than G and H is modified" idiom, as expressed in C, is a horrible way to specify a C function. Surely even the most rabid TDD proponent would not suggest it for a function that belongs in the same C file as a thousand global variable definitions.

A dedicated specification language has the freedom to offer exactly the constructs that make it pleasant to write specifications in it. This means directly including constructs for commonly recurring properties, but also providing the building blocks that make it possible to define new constructs for advanced specifications. So a good specification language has much in common with a programming language.

A dedicated specification language may for instance offer
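
    /* Hedged reconstruction of the snippet elided here: in ACSL, the
       specification language of Frama-C, the boilerplate "may modify the
       global variables G and H, and no other" is written as an assigns
       clause in the function's contract. */
    //@ assigns G, H;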

as a synonym for the boilerplate function specification above, and while this syntax may seem wordy and redundant to the seat-of-the-pants programmer, I hope to have convinced you that in the context of structured development, it fares well in the comparison with the alternatives. Functional specifications give static analyzers that understand them something to chew on, instead of having to limit themselves to the absence of run-time errors. This especially applies to correct static analyzers as emphasized in part 2 of this oversize blog post.

Third parties that contact us are often focused on using static analysis tools to do things they weren't doing before. It is a natural expectation that a new tool will allow you to do something new, but a more accurate description of our activity is that we aim to make it possible to do the same specification and verification that people are already doing (for critical systems), better. In particular, people who discover tools such as Frama-C/Jessie or other analysis tools based on Hoare-Floyd precondition computations often think these tools are intended for verifying, and can only be useful for verifying, complete specifications.

A complete specification for a function is a specification where all the properties expected for the function's behavior have been expressed formally as a contract. In some cases, there is only one function (in the mathematical sense) that satisfies the complete specification. This does not prevent several implementations of this unique mathematical function from existing. More importantly, it is nice to be able to check that the C function being verified is one of them!

Complete specifications can be long and tedious to write. In the same way that a snippet of code can be shorter than the explanation of what it does and why it works, a complete specification can sometimes be longer than its implementation. And it is often pointed out that these specifications can be so large that once written, it would be too difficult to convince oneself that they do not themselves contain a bug.

But just because we are providing a language that would allow you to write complete specifications does not mean that you have to. It is perfectly fine to write minimal formal specifications accompanied by informal descriptions. To be useful, the tools we are proposing only need to do better than testing (the most widely used verification technique at this level). Informal specifications traditionally used as the basis for tests are not complete either. And there may be bugs both in the informal specification and in its translation into test cases.

If anything, the current informal specifications leave out more details and contain more bugs, because they are not machine-checked in any way. The static analyzer can help find bugs in a specification in the same way that a good compiler's sanity checks and warnings help avoid the stupidest bugs in a program.

In particular, because they are written in a dedicated specification language, formal specifications have better composition properties than, say, C functions. A bug in the specification of one function is usually impossible to overlook when trying to use said specification in the verification of the function's callers. Taking an example from ACSL by example, the tutorial/library authored by our colleagues at the applied research institute Fraunhofer FIRST, the specification of the max_element function is quite long, and a bug in this specification may be hard to detect. The function max_element finds the index of the maximum element in an array of integers. The formal version of this specification is complicated by the fact that it specifies that if there are several maximum elements, the function returns the first one.

Next in the document, a function max_seq for returning the value of the maximum element in an array is defined. The implementation is straightforward:
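
    /* Hedged reconstruction of the implementation elided here, following the
       ACSL by example tutorial: max_seq simply indexes the array with the
       result of max_element. The exact signature is an assumption. */
    int max_seq(const int* p, int n)
    {
        return p[max_element(p, n)];
    }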

The verification of max_seq builds on the specification for max_element. This provides additional confidence: the fact that max_seq was verified successfully makes a bug in the specification of max_element unlikely. Not only that, but if the (simpler, easier to trust) specification for max_seq were the intended high-level property to verify, it wouldn't matter that the low-level specification for max_element was not exactly what the specifier intended (say, if there was an error in the specification of the index returned in case of ties): the complete system still has no choice but to behave as intended in the high-level specification. Unlike a compiler that lets you put together functions with incompatible expectations, the proof system always ensures that the contract used at the call point is the same as the contract that the called function was proved to satisfy.

And this concludes what I have to say on the subject of software verification. The first two parts were rather specific to C, and would only apply to embedded code in medical devices. This last part is more generic — in fact, it is easier to adapt the idea of functional specifications for static verification to high-level languages such as C# or Java than to C. Microsoft is pushing for the adoption of its own proposal in this respect, Code Contracts. Tools are provided for the static verification of these contracts in the premium variants of Visual Studio 2010. And this is a good time to link to this video.

Functional specifications are a high-level and versatile tool, and can help with the informational aspects of medical software as well as with the embedded side of things.

I would like to thank again my host Robert Nadler, my colleague Virgile Prevosto and department supervisor Fabrice Derepas for their remarks, and twitter user rgrig for the video link.
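
As a footnote to the Code Contracts mention above, here is a minimal hedged sketch of what such a contract can look like with the .NET 4 System.Diagnostics.Contracts API (the function and its particular contract are invented for illustration):

    using System.Diagnostics.Contracts;

    public static class Accounts
    {
        public static int Withdraw(int balance, int amount)
        {
            Contract.Requires(amount > 0);                  // what the caller must obey
            Contract.Requires(amount <= balance);
            Contract.Ensures(Contract.Result<int>() >= 0);  // what the function guarantees
            return balance - amount;
        }
    }

The static checker in the premium Visual Studio 2010 editions attempts to discharge these obligations at compile time, in the same spirit as the ACSL contracts discussed above.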

The Challenges of Developing Software for Medical Devices

Monday, February 8th, 2010

Developing Software for Medical Devices – Interview with SterlingTech gives a good overview of the challenges that especially face young medical device companies. In particular (my emphasis):

Make sure that your company has a good solid Quality System as it applies to software development. Do not put a Quality System into place that you can not follow. This is the cause of most audit problems.

I couldn't have said it better myself, though I think that focusing on the FDA may distract you from why you're creating software quality processes in the first place. The real purpose of having software design controls is to produce a high-quality, user-friendly, robust, and reliable system that meets the intended use of the device.  If your quality system does that, you won't have to worry about FDA audits.

Since Klocwork is a static analysis tool company, I also want to point out a recent related article that's worth reading -- and trying to fully understand:

A Few Billion Lines of Code Later: Using Static Analysis to Find Bugs in the Real World

Note the user comment by Bjarne Stroustrup.

UPDATE (2/9/10): Here's another good code analysis article:

A Formal Methods-based verification approach to medical device software analysis