OTS/SOUP Software Validation Strategies

My last discussion of Off-The-Shelf software validation only considered the high-level regulatory requirements.  What I want to do now is dig deeper into the strategies for answering question #5:

How do you know it works?

This is the tough one. The other questions are important, but relative to #5, answering them is pretty easy. How to answer this question (i.e., how to accomplish this validation) is the source of a lot of confusion.

There are many business and technical considerations that go into the decision to use OTS or SOUP software as part of a medical device. Articles and books are available that include guidance and general OTS validation approaches. For example, Off-the-Shelf Software: A Broader Picture (warning: PDF) is very informative in this regard:

  • Define business’ use of the system, ideally including use cases and explicit clarification of in-scope and out-of-scope functionality
  • Determine validation deliverables set based on system type, system risk, project scope, and degree of system modification
  • Review existing vendor system and validation documentation
  • Devise strategy for validation that leverages vendor documentation/systems as applicable
  • Create applicable system requirements specification and design documentation
  • Generate requirements-traceable validation protocol and execute validation
  • Put in place system use, administration, and maintenance procedures to ensure the system is used as intended and remains in a validated state

This is great stuff, but unfortunately it does not help you answer question #5 for a particular type of software. That's what I want to try to do here.

OTS really implies Commercial off-the-shelf (COTS) software. The "commercial" component is important because it presumes that the software in question is a purchased product (typically in a "shrink-wrapped" package) that is designed, developed, and supported by a real company.  You can presumably find out what design controls and quality systems are in place for the production of their software and incorporate these findings into your own OTS validation.  If not, then the product is essentially SOUP (keep reading).

Contrast OTS with Software of Unknown Provenance (SOUP).  It is very unlikely that you can determine how this software was developed, so it's up to you to validate that it does what it's supposed to do.  In some instances this may be legacy custom software, but these days it probably means the integration of an open source program or library into your product.

The following list is by no means complete. It is only meant to provide some typical software categories and the strategies used for validating them. Some notes:

  • I've included a Hazard Analysis section in each category because the amount of validation necessary is dependent on the level of concern.
  • The example requirements are not comprehensive. I just wanted to give you a flavor for what is expected.
  • Always remember: requirements must be testable. The test protocol has to include pass/fail criteria for each requirement. This is QA 101, but it is often forgotten.
  • I have not included any example test protocol steps or reports.  If you're going to continue reading, you probably don't need help in that area.

Operating Systems

Examples:

  • Windows XP SP3
  • Windows 7 32-bit and 64-bit
  • Red Hat Linux

Approach:

  1. Hazard Analysis: Do a full assessment of the risks associated with each OS.
    • Pay particular attention to the hazards associated with device and device driver interactions.
    • List all hazard mitigations.
    • Provide a residual Level of Concern (LOC) assessment after mitigation -- hopefully this will be negligible.
    • If the residual LOC is major, then Special Documentation can still be provided to justify its use.
  2. Use your full product verification as proof that the OS meets the OTS requirements. This has validity since your product will probably only be using a small subset of the full capabilities of the OS.  All of the other functionality that the OS provides would be out of scope for your product.
  3. This means that a complete re-validation of your product is required for any OS updates.
  4. There is no test protocol or report with this approach. The OS is considered validated when the product verification has been successfully completed.

Compilers

Examples:

  • Visual Studio 2010 (C# or C++)

Approach:
  1. Hazard Analysis:
    • For a vast majority of cases, I think it is safe to say that a compiler does not directly affect the functioning of the software or the integrity of the data.  What a program does (or doesn't do) depends on the source code, not on the compiled version of that code.
    • The compiler is also not responsible for faults that may occur in devices it controls. The application just needs to be written so that it handles these conditions properly.
    • For some embedded applications that use specialized hardware and an associated compiler, the above will not necessarily be true. All functionality of the compiler must be validated in these cases.
  2. For widely used compilers (like Microsoft products) full product verification can be used as proof of the OTS requirements.
  3. Validation of a new compiler version, e.g. upgrading from VS 2008 to VS 2010: showing that the same code base compiles and all unit tests pass under both versions can be used as proof. This assumes, of course, that the old version was previously validated.
  4. The compiler is considered fit for use after the product verification has passed, so there is no test protocol or report in this case either.
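The proof in step 3 boils down to comparing per-test outcomes between the builds produced by the two compiler versions. A minimal sketch of that comparison in Python (the test names and outcomes here are made up for illustration) might look like this:

```python
def changed_outcomes(old: dict, new: dict) -> list:
    """Return the names of tests whose outcome differs between the two
    builds, including tests that appear in only one result set."""
    all_tests = old.keys() | new.keys()
    return sorted(t for t in all_tests if old.get(t) != new.get(t))

# Hypothetical per-test outcomes recorded for the VS 2008 and VS 2010 builds.
vs2008 = {"test_dose_calc": "pass", "test_alarm_limits": "pass"}
vs2010 = {"test_dose_calc": "pass", "test_alarm_limits": "pass"}

# Pass criterion: no test changed outcome between compiler versions.
assert changed_outcomes(vs2008, vs2010) == []
```

The pass/fail criterion is simply that the changed-outcome list is empty.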

Integrated Libraries

Examples:

Approach:
  1. Hazard Analysis: These open source libraries are integrated into the product software. The impact on product functioning, in particular data integrity, must be fully assessed.
  2. First, list the requirements for the functionality you will actually be using. For example, typical logging functionality might include:
    • The logging system shall be able to post an entry labeled as INFO in a text file.
    • The logging system shall be able to post an entry labeled as INFO in a LEVEL column of a SQL Server database.
    • ... same for ERROR, DEBUG, WARN, etc.
    • The logging system shall include time/date and other process information formatted as "YYYY-MM-DD HH:MM:SS..." for each log entry.
    • The logging system shall be able to log exceptions at all log levels, and include full stack traces.
  3. For database functionality, listing basic CRUD requirements plus other specialized needs can be done in the same way.
  4. I have found that the easiest way to test these kinds of requirements is to simply write unit tests that prove the library performs the desired functionality.  The unit tests are essentially the protocol and a report showing that all asserts have passed is a great artifact.
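To give a flavor of what that looks like, here is a sketch of requirement-per-test unit tests written against Python's standard logging module. This is purely illustrative -- you would write the equivalent tests in your product's language against the actual library you are integrating:

```python
import io
import logging
import unittest

class LoggingRequirementTests(unittest.TestCase):
    """Each test maps to one requirement; its assert is the pass/fail criterion."""

    def setUp(self):
        # Capture log output in memory so the asserts can inspect it.
        self.stream = io.StringIO()
        handler = logging.StreamHandler(self.stream)
        handler.setFormatter(logging.Formatter(
            "%(asctime)s %(levelname)s %(message)s", "%Y-%m-%d %H:%M:%S"))
        self.log = logging.getLogger("validation")
        self.log.setLevel(logging.DEBUG)
        self.log.handlers = [handler]

    def test_info_entry_is_labeled(self):
        # Req: post an entry labeled as INFO.
        self.log.info("system started")
        self.assertIn("INFO system started", self.stream.getvalue())

    def test_timestamp_format(self):
        # Req: each entry begins with a YYYY-MM-DD HH:MM:SS timestamp.
        self.log.warning("low battery")
        self.assertRegex(self.stream.getvalue(),
                         r"^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2} WARNING")

    def test_exception_includes_stack_trace(self):
        # Req: exceptions are logged with a full stack trace.
        try:
            raise ValueError("bad dose value")
        except ValueError:
            self.log.exception("calculation failed")
        out = self.stream.getvalue()
        self.assertIn("Traceback", out)
        self.assertIn("ValueError: bad dose value", out)
```

Run with `python -m unittest`; the resulting test report is the validation artifact showing that every pass/fail criterion was met.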
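The same pattern works for the CRUD requirements in step 3. The sketch below uses Python's bundled sqlite3 module just to keep the example self-contained; a real protocol would of course exercise the actual database library your product integrates:

```python
import sqlite3

def validate_crud() -> bool:
    """Exercise basic Create/Read/Update/Delete requirements against an
    in-memory database; each assert is one pass/fail criterion."""
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE log (id INTEGER PRIMARY KEY, level TEXT, msg TEXT)")

    # Create: insert a row.
    db.execute("INSERT INTO log (level, msg) VALUES (?, ?)", ("INFO", "started"))
    # Read: retrieve it back unchanged.
    row = db.execute("SELECT level, msg FROM log WHERE id = 1").fetchone()
    assert row == ("INFO", "started")
    # Update: modify a column and confirm the change.
    db.execute("UPDATE log SET level = ? WHERE id = 1", ("ERROR",))
    assert db.execute("SELECT level FROM log WHERE id = 1").fetchone()[0] == "ERROR"
    # Delete: remove the row and confirm it is gone.
    db.execute("DELETE FROM log WHERE id = 1")
    assert db.execute("SELECT COUNT(*) FROM log").fetchone()[0] == 0
    db.close()
    return True
```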

Version Control Systems

Examples:

Approach:
  1. Hazard Analysis: These are configuration management tools and are not part of the product. As such, the level of concern is generally low.
  2. As above, you first list the specific functionality that you expect the VCS to perform. Here are some examples of the types of requirements that need to be tested:
    • The product shall be able to add a directory to a repository.
    • The product shall be able to add a file to a repository.
    • The product shall be able to update a file in a repository.
    • The product shall be able to retrieve the latest revision of files and directories.
    • The product shall be able to branch a revision of files and directories.
    • The product shall be able to merge branched files and directories.
  3. You then write a protocol that tests each one. This would include detailed instructions on how to perform these operations along with the pass/fail criteria for each requirement.
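Much of such a protocol can be scripted. The sketch below assumes Git is the VCS under validation (an assumption on my part -- substitute your own tool) and drives it through the add/update/branch/merge requirements in a throwaway repository. Any step that fails raises an error, and the final asserts are the pass/fail criteria:

```python
import pathlib
import subprocess
import tempfile

def git(repo, *args):
    """Run one git command in `repo`; a non-zero exit fails that protocol step."""
    return subprocess.run(
        ["git", "-c", "user.name=Validator", "-c", "user.email=v@example.com",
         *args],
        cwd=repo, check=True, capture_output=True, text=True).stdout

def validate_vcs() -> bool:
    with tempfile.TemporaryDirectory() as repo:
        git(repo, "init")
        # Req: add a directory (containing a file -- git tracks files, not
        # empty directories) to a repository.
        docs = pathlib.Path(repo, "docs")
        docs.mkdir()
        (docs / "spec.txt").write_text("revision 1\n")
        git(repo, "add", "docs")
        git(repo, "commit", "-m", "add docs/spec.txt")
        # Req: update a file in a repository.
        (docs / "spec.txt").write_text("revision 2\n")
        git(repo, "commit", "-am", "update spec")
        # Req: branch a revision of files and directories.
        git(repo, "checkout", "-b", "feature")
        (docs / "notes.txt").write_text("branch work\n")
        git(repo, "add", "docs/notes.txt")
        git(repo, "commit", "-m", "work on branch")
        # Req: merge the branched files back into the original branch.
        git(repo, "checkout", "-")   # return to the default branch
        git(repo, "merge", "feature")
        # Req: retrieve the latest revision -- pass/fail criteria for the above.
        assert git(repo, "show", "HEAD:docs/spec.txt") == "revision 2\n"
        assert git(repo, "show", "HEAD:docs/notes.txt") == "branch work\n"
    return True
```

Note that because Git versions files rather than empty directories, the "add a directory" requirement is exercised by adding a directory that contains a file.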

Issue Tracking Tools

Examples:

Approach:
  1. Hazard Analysis: These tools are used for the management of the development project. Again, the level of concern is generally low.
  2. You only need to validate the functionality you intend to use.  The features that you don't use do not need to be tested.
  3. You simply need to test the specific functionality.  Some example requirements -- the roles, naming conventions, and workflow will of course depend on your organization and the tool being used:
    • A User shall be able to create a new issue.
    • A User shall be able to comment on an issue.
    • A Project Manager shall be able to assign an issue to a Developer.
    • A Developer shall be able to change the state of an issue to 'ready for test'.
    • A Tester shall be able to change the state of an issue to 'verified'.
    • The tool shall be able to send e-mail notifications when an issue has been modified.
    • An Administrator shall be able to define a milestone.
  4. A protocol with detailed instructions and pass/fail criteria is executed and reported on.
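One way to make role-based requirements like these unambiguous before writing the protocol is to model the intended workflow explicitly. The toy Python model below captures the roles, states, and transitions from the example requirements above; it models the process, not any real tracker's API (the e-mail notification requirement is omitted since it can only be tested against the live tool):

```python
# Allowed transitions: (role, from_state, to_state). These mirror the example
# requirements; a real protocol would exercise the tracker itself instead.
TRANSITIONS = {
    ("Developer", "open", "ready for test"),
    ("Tester", "ready for test", "verified"),
}

class Issue:
    def __init__(self, title, creator):
        # Req: a User shall be able to create a new issue.
        self.title, self.state, self.assignee = title, "open", None
        self.comments = [f"created by {creator}"]

    def comment(self, user, text):
        # Req: a User shall be able to comment on an issue.
        self.comments.append(f"{user}: {text}")

    def assign(self, role, user, developer):
        # Req: only a Project Manager shall be able to assign an issue.
        if role != "Project Manager":
            raise PermissionError(f"{user} ({role}) cannot assign issues")
        self.assignee = developer

    def move(self, role, new_state):
        # Req: state changes are restricted by role and workflow order.
        if (role, self.state, new_state) not in TRANSITIONS:
            raise PermissionError(f"{role} cannot move {self.state!r} -> {new_state!r}")
        self.state = new_state

# A walk through the happy path; each step maps to one requirement.
issue = Issue("pump alarm too quiet", "alice")
issue.comment("bob", "reproduced on unit 3")
issue.assign("Project Manager", "bob", "carol")
issue.move("Developer", "ready for test")
issue.move("Tester", "verified")
```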

Validation is a lot of work but is necessary to ensure that all of the tools and components used in the development of medical device software meet their intended functionality.


8 Responses to OTS/SOUP Software Validation Strategies

  1. Pingback: Validation of Off-The-Shelf Software Development Tools | Bob on Medical Device Software

  2. Denis says:

    Hello Rob,

    The part on compiler validation: I would think since compiler is responsible for producing the final binary which runs on the device, a bug in the compiler that produces incorrect binary may have very serious implications.

    Thus what you wrote above, “.. safe to say that a compiler does not directly affect the functioning of the software or the integrity of the data”, sounds incorrect, as it may in fact corrupt data or prevent the whole device from functioning correctly.
    And so the compiler may in fact be responsible for device faults.

    I would say the part “.. some embedded applications that use specialized hardware and an associated compiler …” should be changed to something like “For any medical device classified as high-risk…”.

    Would you disagree?

  3. Dave says:

    Denis, I am not as concerned about the translation of source code to binary, because compiler companies do go through validation for that.

    However, where I do have a concern is the use of source code from libraries included with these compilers. E.g. what is the effect of using GCC with its open-source libc vs. Visual Studio, which also has a built-in libc library? One is open source, the other is commercially developed, but if we don’t have the paperwork to prove the process, are both considered SOUP? Or is the first SOUP but the second COTS?

    Dave

  4. Sriram Karunagaran says:

    Hello Rob,

    Thanks for the wonderful post. I would like to ask you a question regarding SOUP. Suppose we are developing a Class IIb medical device and want to use FreeRTOS instead of SafeRTOS. Will the product validation cover the necessary testing that is required for certification of the product? The SafeRTOS vendors claim that we will need to submit the product trail, test cases, etc. of the SOUP item for certification. Can you suggest a strategy?

    Regards,
    Sriram.K

  5. Andy Kula says:

    This is a very helpful article, thank-you.

    How far are we to go with respect to OTS productivity tools? I certainly understand validation of a compiler or ALM tool (visual studio online), but what about common productivity tools that are also used when making a medical device e.g. merge tools, Excel, PowerPoint, Paint, etc.?

    Where do we draw the line? How about CMake or DICOM viewers? Are there guidelines to help us judge whether a given tool should be included in the group of “official” OTS software.

    Thanks,
    -Andy

  6. seb says:

    Hello, the article is interesting, but I have to say it’s totally disappointing. Let’s say I’m developing DICOM viewer software, for example.

    How can my Git failing to commit a file to my repository (where I keep my code) be a risk to the health of a human being?

    You conclude:
    “Validation is a lot of work but is necessary to ensure that all of the tools and components used in the development of medical device software meet their intended functionality.”

    Frankly, most of those considerations seem pointless to me and a big waste of time. If they are intended to ensure patient safety, how can writing a detailed hazard analysis about Windows, Visual Studio, g++, stdlib, Jira, Git, … possibly ensure the safety of the patient?!

    In a DICOM viewer the safety of the patient can only depend on the image quality/compression, the orientation of the image/patient, measurements, image processing algorithms, screen quality, i.e. mostly technical elements related to the images. Visual Studio or Git bugs cannot have any impact on those elements…

    I get that the main point is to show the FDA: “hey look, we have procedures to test Jira and Git, so we are serious buddies!”
    But seriously, should I also write a procedure to test each damn key on my keyboard?!

  7. Pingback: Ots - Off-The-Shelf Software Use In Medical Devices | Fda

  8. Oliver Eidel says:

    Hey Bob,

    just found your blog, great post!

    It’s interesting to learn how other people handle the SOUP verification / validation part. My experience so far is that it often annoys developers a lot as they already decided on a certain SOUP and are now only retrospectively verifying it.

    I wonder, for open-source packages, would you consider running the existing test suite (if it exists) as sufficient verification? If you write your own unit tests you might end up testing something which was already covered by the author anyway.

    Have a nice day!

    Oliver
