Archive for the ‘Networking’ Category

Medical Device Software on Shared Computers

Monday, September 7th, 2009

The issues raised in Tim's post Running Medical Device Software on Shared Computers open a real Pandora's box. Installing medical device software on general-purpose computers is an intractable problem.

It's very similar to the complications associated with Networked Medical Devices, only worse. An FDA-approved device in a changing network environment is one thing. Software that controls a medical device on a PC where the user is free to install operating system upgrades, applications, and other device drivers is a recipe for disaster.

I don't care how obsessive a vendor is -- there is no way for a medical device manufacturer to verify proper operation across all possible hardware and software environments.

With today's PC architectures, the highest-risk area is the device driver level. Running multiple devices that require even modest I/O bandwidth can cause interference that could result in lost or significantly delayed data. This is especially true on Windows XP or Vista, which do not inherently provide any real-time data processing capabilities.

I think the best strategy is to provide stand-alone medical devices that have no dependencies on the PC hardware and software that may be available for downstream data processing and display. This not only reduces compatibility risk, but it can also address mobility issues. With miniaturization and wireless capabilities, the medical device can now travel with the patient.

Also, with Pandora's box safely closed, solving the networked medical device issues suddenly feels manageable.

UPDATE (9/15/09): Here's an interesting take on this subject from the consumer perspective: Should Medical Devices Multitask?

Networked Medical Devices

Saturday, December 20th, 2008

If you work with networked medical devices, Tim's post Medical Device Networks Trouble Industry is a must-read.

In order to better illustrate the bigger picture I thought this diagram might help:

[Diagram: networked medical device integration -- the major players and the enterprise network]

This summarizes the relationship between the major players involved with integrating medical devices into an enterprise network and highlights some points I think are important.

  1. Only medical device manufacturers have to be concerned with the FDA regulatory aspects of placing computing and networking components into a medical environment. I've previously discussed some of the regulatory and verification/validation issues with Connecting Computers to FDA Regulated Medical Devices.
  2. All of the players -- hospital IT, medical devices, and IT/EMR software vendors -- deal with the same commercially available hardware and software components. This is simply due to economies of scale. The medical industry isn't large enough to create the quantities necessary to drive the cost out of most of these devices. We have to depend on the broader high-volume commercial marketplace in order to reduce cost.
  3. The medical device industry is involved in standards development, but at the end of the day it's the broader market adoption that drives down the cost for everyone (see point #2). I think this is one of the main reasons why "the days of private medical device networks as we know them are over."
  4. FDA guidance and regulatory efforts in this area will always be in catch-up mode. As the technology and trends change they will be forced to evaluate the impact on patient safety after the fact. This is already happening -- as Tim points out (emphasis mine):

The bottom line here is that we can’t all look to the FDA to solve these issues that are the consequence of putting medical device systems on enterprise networks - when you do this, your enterprise network becomes part of a medical device.

Medical devices have been added to enterprise networks for years, yet IEC 80001 and the Medical Device Data Systems rule are still just drafts.

Some other thoughts:

  • Private Medical Device Networks: Wireless networks are more often private. For wired networks, "logically separate private networks through the use of network switches and routers" are more the norm. Since Ethernet took over in the mid-90s (anyone remember Token Ring?), most hospitals have not allowed private in-wall wiring installations.
  • Enterprise Networks: One of the major challenges is just getting your private "logical" system installed on the hospital infrastructure. Reliability, compatibility, routing, and bandwidth are just a few of the issues. One of the troubling aspects of this from a regulatory point of view is that there is no way for a medical device manufacturer to test all of the possible configurations that may arise in the field. The sustaining engineering and network variability issues are related problems.
  • Hospital IT Culture: This is an issue that I have seen first-hand. A previous medical device I worked on used an embedded POSIX-compliant UNIX variant. We ran into several hospital IT departments that refused to approve the medical device purchase because their policy would only allow computers running certain versions of the Microsoft OS on their network. This happened quite a few years ago. I can only hope that the integration philosophies of hospital IT departments have become more enlightened since then.

UPDATE  (1/15/09): Here's another good article on this subject:  Smoothing the Rocky Path of Interconnected Medical Devices.

Connecting Computers to FDA Regulated Medical Devices

Wednesday, June 18th, 2008

Pete Gordon asked a couple of questions regarding FDA regulations for Internet-based reporting software that interfaces with medical devices. The questions are essentially:

  1. How much documentation (SRS, SDS, Test Plan) is required and at what stage can you provide the documentation?
  2. How does the FDA view SaaS architectures?

The type of software you're talking about currently has no real FDA regulatory oversight. The FDA has recently proposed new rules for connectivity software. I've commented on the MDDS rules, but Tim has a complete overview here: FDA Issues New MDDS Rule. As Tim notes, if the FDA puts the MDDS rules into place and becomes more aggressive about regulation, many software vendors that provide medical device interfaces will be required to submit 510(k) premarket notifications.

Dealing with the safety and effectiveness of medical devices in complex networked environments is on the horizon. IEC 80001 (and here) is a proposed process for applying risk management to enterprise networks incorporating medical devices. My mantra: high-quality software and well-tested systems will always be the best way to mitigate risk.

Until something changes, the answer to question #1 is that if your software is not a medical device, you don't even need to deal with the FDA. The answer to question #2 is the same. The FDA doesn't know anything about SaaS architectures unless it's submitted as part of a medical device 510(k).

I thought I'd take a more detailed look at the architecture we're talking about so we can explore some of the issues that need to be addressed when implementing this type of functionality.

[Diagram: typical medical device connectivity architecture -- devices, communications server, EMR, and web clients]

This is a simplified view of the way medical devices typically interface to the outside world. The Communications Server transmits and receives data from one or more medical devices via a proprietary protocol over whatever media the device supports (e.g. TCP/IP, USB, RS-232, etc.).

In addition to having local storage for test data, the server could pass data directly to an EMR system via HL7 or provide reporting services via HTTP to a Web client.
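
To make the store-and-forward role a bit more concrete, here is a minimal sketch of that kind of communications server loop. It's only an illustration, not any particular product: the port numbers, the "emr-host" address, and the flat-file archive are made-up placeholders, and the MLLP framing is simply the conventional way HL7 v2 traffic rides over TCP.

using System;
using System.IO;
using System.Net;
using System.Net.Sockets;
using System.Text;

// Minimal store-and-forward loop: receive a device message over TCP,
// persist it locally, then forward it to a downstream system.
class CommServerSketch
{
    static void Main()
    {
        TcpListener listener = new TcpListener(IPAddress.Any, 4000); // device-facing port (illustrative)
        listener.Start();

        while (true)
        {
            using (TcpClient device = listener.AcceptTcpClient())
            using (StreamReader reader = new StreamReader(device.GetStream()))
            {
                string message = reader.ReadToEnd();   // proprietary device payload

                // 1. Store: append to a local archive (a real server would use a database).
                File.AppendAllText("device-data.log", message + Environment.NewLine);

                // 2. Forward: send upstream, e.g. to an EMR interface engine.
                Forward("emr-host", 6661, message);
            }
        }
    }

    static void Forward(string host, int port, string payload)
    {
        using (TcpClient client = new TcpClient(host, port))
        {
            // Wrap the payload in MLLP start/end block characters.
            byte[] framed = Encoding.ASCII.GetBytes("\x0b" + payload + "\x1c\r");
            client.GetStream().Write(framed, 0, framed.Length);
        }
    }
}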

There are many other useful functions that external software systems can provide. By definition though, an MDDS does not do any real-time patient monitoring or alarm generation.

Now let's look at what needs to be controlled and verified under these circumstances.

  1. Communications interaction with proper medical device operation.
  2. Device communications protocol and security.
  3. Server database storage and retrieval.
  4. Server security and user authentication.
  5. Client/server protocol and security.
  6. Client data transformation and presentation to the user (including printed reports).
  7. Data export to other formats (XML, CSV, etc.).
  8. Client HIPAA requirements.

Not only is the list long, but these systems involve the combination of custom-written software (in multiple languages), multiple operating systems, configurable off-the-shelf software applications, and integrated commercial and open source libraries and frameworks. Also, all testing tools (hardware and software) must be fully validated.

One of the more daunting verification tasks is identifying all of the possible paths that data can take as it flows from one system to the next. Once identified, each path must be tested for data accuracy and integrity as it's reformatted for different purposes, communications reliability, and security. Even a modest one-way store-and-forward system can end up with a hundred or more unique data paths.
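
To see how fast the count grows, a back-of-the-envelope sketch helps. The categories below are invented purely for illustration, but multiplying a handful of devices by the available transports, transformations, and destinations already lands well past one hundred paths:

using System;

// Back-of-the-envelope path count for a hypothetical store-and-forward system.
class DataPathCount
{
    static void Main()
    {
        string[] devices      = { "ECG", "SpO2", "NIBP", "Temp" };
        string[] transports   = { "RS-232", "TCP/IP" };
        string[] transforms   = { "raw archive", "HL7 ORU", "PDF report", "CSV export" };
        string[] destinations = { "local DB", "EMR", "web client", "printer" };

        int paths = 0;
        foreach (string device in devices)
            foreach (string transport in transports)
                foreach (string transform in transforms)
                    foreach (string destination in destinations)
                        paths++;   // each combination is a distinct path to verify

        Console.WriteLine(paths);  // 4 * 2 * 4 * 4 = 128 unique data paths
    }
}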

A full set of requirements, specifications, and verification and validation test plans and procedures would need to be in place and fully executed for all of this functionality in order to satisfy the FDA Class II GMP requirements. This means that all of the software and systems must be complete and under revision control. There is no "implementation independent" scenario that will meet the GMP requirements.

It's no wonder that most MDDS vendors (like EMR companies) don't want to have to deal with this. Even for companies that already have good software quality practices in place, raising the bar up to meet FDA quality compliance standards would still be a significant organizational commitment and investment.

HL7 Interfacing: The last mile is the longest.

Saturday, December 15th, 2007

Tim Gee mentions the Mirth Project as a cost effective solution for RHIOs (regional health information organizations). In particular, he notes that the WebReach appliance is "ready to go" hardware and software.

I've recently started looking at HL7 interface engines for providing our ICG electronic records to customer EMR systems. I've mainly been evaluating Mirth and NeoIntegrate from Neotool.

One of the Neotool bullet points about HL7 V2 states:

Not “Plug and Play” - it provides 80 percent of the interface and a framework to negotiate the remaining 20 percent on an interface-by-interface basis

Since HL7 V2 is the most widely adopted interface in the US, that last 20% can be a significant challenge. This is one of the primary purposes for HL7 integration tools like Mirth and NeoIntegrate -- to make it as easy as possible to complete that last mile.
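
To get a feel for what that remaining 20% looks like, here is a made-up ADT message fragment and the trivial pipe-and-hat parsing every interface starts from. The ZPI segment is a hypothetical site-specific Z segment -- exactly the kind of local variation that has to be negotiated interface by interface:

using System;

// Trivial HL7 v2 "pipe-and-hat" parsing of a made-up ADT^A01 message.
class Hl7Sketch
{
    static void Main()
    {
        string message =
            "MSH|^~\\&|DeviceApp|OurCompany|EMRApp|Hospital|200712150830||ADT^A01|MSG00001|P|2.3\r" +
            "PID|1||123456^^^MRN||DOE^JOHN||19600101|M\r" +
            "PV1|1|I|ICU^2^01\r" +
            "ZPI|1|site-specific data goes here\r";   // hypothetical custom Z segment

        // Segments are delimited by carriage returns, fields by pipes.
        foreach (string segment in message.Split(new char[] { '\r' }, StringSplitOptions.RemoveEmptyEntries))
        {
            string[] fields = segment.Split('|');
            Console.WriteLine("{0}: {1} fields", fields[0], fields.Length - 1);
        }
    }
}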

If you look closely at the Mirth appliance literature you'll see this in the Support section:

For customers requiring assistance with channel development, WebReach consulting and engineering services are available, and any custom work performed by WebReach can be added to your annual support agreement.

They're providing a turn-key hardware and integration engine system, but you either have to create the custom interfaces yourself or hire them (or someone else) to do it for you.

<AnalogyAlert>
This means that you have bought the hammer and identified the nail(s) to pound in. All you need to do now is find and hire a carpenter to complete the job.
</AnalogyAlert>

This really shouldn't be that surprising though. Custom engineering and support is the business model for the WebReach Mirth project, and I'm sure it's a large revenue generator for Neotool as well.

There is certainly great value in being able to purchase a preconfigured and supported HL7 interface appliance. Just be aware that it's not quite ready to go.

Update 17-Dec-07:

If anyone has experience using HL7 integration engines that they'd like to share, I'd love to hear from you (preferably through the comments so they're shared, but private mail is also fine). In particular, I know there are a number of competing offerings to the ones mentioned in this post, and it would be good to know if they are worth evaluating. Thanks!

Developing a real-time data flow and control model with WCF

Saturday, August 11th, 2007

A Windows Communication Foundation (WCF) service is defined through its operations and data contracts. One of the major benefits of WCF is the ease with which a client can create and use these services through the automatically generated proxy classes. The service side is only half of the communications link though. Discovering the correct WCF configuration options that allow a solution to operate properly was not as easy as I thought it would be.

This post describes a specific WCF-based data control and streaming architecture. The primary goal of this service is to provide a continuous data stream (as buffers of short values) from a real-time data acquisition source. The client would then be able to display the data as it became available or store the data when directed by the user. In addition, the service allows the client to both get status information (Getters) and control certain attributes (Setters) of the underlying data source. This is illustrated here:

[Diagram: real-time data flow and control architecture]

The DataBufferEvent is defined as a one-way callback and continuously delivers data to the client. The IsOneWay property is valid for any operation that does not have a return value and improves network performance by not requiring a return message. The Getters and Setters [for you Java folks, this has nothing to do with JavaBeans] can be called at any time. Changing a data source attribute with a Setter will probably affect the data stream, but it is the responsibility of the data source to ensure data integrity. The underlying transport binding must support duplex operation (e.g. wsDualHttpBinding or netTcpBinding) in order for this scenario to work.

Here is what an example (a sine wave generator) service interface looks like:
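
(A minimal sketch -- the operation names SetFrequency, SetAmplitude, StartAcquisition, and so on are illustrative placeholders rather than the exact ones from my project.)

using System.ServiceModel;

// Callback contract: the service pushes data buffers to the client.
public interface ISineWaveCallback
{
    // One-way: the service doesn't wait for a reply, so the stream keeps flowing.
    [OperationContract(IsOneWay = true)]
    void DataBufferEvent(short[] buffer);
}

// Duplex service contract for the sine wave data source.
[ServiceContract(SessionMode = SessionMode.Required,
                 CallbackContract = typeof(ISineWaveCallback))]
public interface ISineWaveService
{
    // Setters: control attributes of the underlying data source.
    [OperationContract]
    void SetFrequency(double frequencyHz);

    [OperationContract]
    void SetAmplitude(short amplitude);

    // Getters: status information.
    [OperationContract]
    double GetFrequency();

    [OperationContract]
    short GetAmplitude();

    // Start and stop the continuous DataBufferEvent stream.
    [OperationContract]
    void StartAcquisition();

    [OperationContract]
    void StopAcquisition();
}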

The service class is implemented as follows:
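
(Again a sketch rather than the original source -- the key pieces are the ServiceBehavior attribute and the callback channel captured from the OperationContext.)

using System;
using System.ServiceModel;
using System.Threading;

// One service instance per client session. The simulated source below lives
// inside the instance; a real device would sit behind a shared manager that
// fans data out to each session.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerSession)]
public class SineWaveService : ISineWaveService
{
    private ISineWaveCallback callback;
    private Timer timer;
    private double frequencyHz = 10.0;
    private short amplitude = 1000;
    private double phase;

    public void StartAcquisition()
    {
        // Grab the callback channel for this client's session.
        callback = OperationContext.Current.GetCallbackChannel<ISineWaveCallback>();
        // Simulate a real-time source: push a buffer every 100 ms.
        timer = new Timer(SendBuffer, null, 0, 100);
    }

    public void StopAcquisition()
    {
        if (timer != null) timer.Dispose();
    }

    public void SetFrequency(double f) { frequencyHz = f; }
    public void SetAmplitude(short a) { amplitude = a; }
    public double GetFrequency() { return frequencyHz; }
    public short GetAmplitude() { return amplitude; }

    private void SendBuffer(object state)
    {
        const int samplesPerBuffer = 100;
        const double sampleRateHz = 1000.0;
        short[] buffer = new short[samplesPerBuffer];
        for (int i = 0; i < samplesPerBuffer; i++)
        {
            buffer[i] = (short)(amplitude * Math.Sin(phase));
            phase += 2.0 * Math.PI * frequencyHz / sampleRateHz;
        }
        // One-way callback: returns immediately, no reply message expected.
        callback.DataBufferEvent(buffer);
    }
}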

The InstanceContextMode.PerSession mode is appropriate for this type of interface. Even though there is probably only a single data source, you still want to allow multiple service session instances to provide data simultaneously to different clients. The data source would be responsible for managing the multiple data requesters.
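
For completeness, here's one way to host the service on a duplex-capable binding entirely in code (netTcpBinding is shown; the address is arbitrary):

using System;
using System.ServiceModel;

class HostProgram
{
    static void Main()
    {
        ServiceHost host = new ServiceHost(typeof(SineWaveService));
        // netTcpBinding supports duplex callbacks out of the box.
        host.AddServiceEndpoint(typeof(ISineWaveService),
                                new NetTcpBinding(),
                                "net.tcp://localhost:8000/SineWaveService");
        host.Open();
        Console.WriteLine("Service running. Press Enter to exit.");
        Console.ReadLine();
        host.Close();
    }
}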

With the service side complete, all the client needs to do is create the proxy classes (with either Visual Studio or Svcutil), set up the DataBufferEvent callback, and call the appropriate control functions. My first client was a WinForms application to display the data stream. The problem I ran into is that even though the data callbacks worked properly, I would often see the control functions hang the application when they were invoked.

It took quite a bit of searching around before I found the solution, which is here. You can read the details about the SynchronizationContext issues, but this caused me to spin my wheels for several days. The upside is that in trying to diagnose the problem I learned how to use the Service Trace Viewer Tool (SvcTraceViewer.exe) and the Configuration Editor Tool (SvcConfigEditor.exe, which is in the VS2005 Tools menu).

So after adding the appropriate CallbackBehavior attributes, here are the important parts of the client that allow this WCF model to operate reliably:
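
(This is a sketch of the shape of that client code: SineWaveServiceClient is the assumed name of the generated proxy, and DisplayBuffer is a hypothetical method on the form that plots the samples.)

using System;
using System.ServiceModel;
using System.Windows.Forms;

// UseSynchronizationContext = false keeps the callbacks off the WinForms
// SynchronizationContext -- this is what stopped the Getter/Setter calls
// from deadlocking the UI thread.
[CallbackBehavior(UseSynchronizationContext = false)]
public class SineWaveCallback : ISineWaveCallback
{
    private readonly Form1 form;

    public SineWaveCallback(Form1 form) { this.form = form; }

    public void DataBufferEvent(short[] buffer)
    {
        // Marshal the data back to the UI thread explicitly.
        form.BeginInvoke(new Action<short[]>(form.DisplayBuffer), new object[] { buffer });
    }
}

public partial class Form1 : Form
{
    private SineWaveServiceClient client;   // generated proxy (assumed name)

    private void Form1_Load(object sender, EventArgs e)
    {
        InstanceContext context = new InstanceContext(new SineWaveCallback(this));
        client = new SineWaveServiceClient(context);
        client.StartAcquisition();
    }

    private void frequencyButton_Click(object sender, EventArgs e)
    {
        client.SetFrequency(5.0);   // control call no longer hangs the UI
    }

    public void DisplayBuffer(short[] buffer)
    {
        // Plot or store the incoming samples here.
    }
}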

The first take-away here is that WCF is a complex beast. The second is that even though it's easy to create proxy classes from a WCF service, you have to understand and take into account both sides of the communications link. It seems so obvious now!

That's it. This WCF component is just part of a larger project that I'm planning on submitting as an article (with source code) on CodeProject, someday. If you'd like to get a copy of the source before that, just let me know and I'll send you what I currently have.

Update: Proof of my first take-away: Callbacks, ConcurrencyMode and Windows Clients. Thanks Michele! 🙂