Google, Microsoft, and Health

I think the recent New York Times article entitled Google and Microsoft Look to Change Health Care missed the bigger picture. The article talks about other Internet companies (like WebMD), but it makes no mention of the Federal Government's involvement in this arena.

In particular, there's the Nationwide Health Information Network (NHIN), which was initiated by an executive order in April 2004:

The Nationwide Health Information Network (NHIN) is the critical portion of the health IT agenda intended to provide a secure, nationwide, interoperable health information infrastructure that will connect providers, consumers, and others involved in supporting health and healthcare. The NHIN will enable health information to follow the consumer, be available for clinical decision making, and support appropriate use of healthcare information beyond direct patient care so as to improve health.

At the end of May NHIN published four prototype architectures. The proposals are standards-based, use decentralized databases and services ('network of networks'), and try to incorporate existing healthcare information systems. The companies involved were Accenture, CSC/Connecting for Health, IBM, and Northrop Grumman.

It seems to me that Google and Microsoft are using their proprietary technologies to try to achieve the same goals as NHIN. One of the major differences of course is transparency. Everything that NHIN does is open to public scrutiny whereas GOOG/MSFT have their own market research programs and keep their strategies (for making money) close to the vest.

Besides ensuring privacy, I would argue that one of the key components for creating a successful NHIN is interoperability. Even with "standards" like HL7 and DICOM being available, IMHO the current state of the Electronic Health/Medical Records industry is total chaos. Just like GOOG/MSFT are creating their own islands of knowledge, there are a lot of other vendors (84 listed on Yahoo! Directory) doing the same. As a medical device developer trying to interface with customer EMR systems, we're faced with having to provide essentially unique solutions to (what seems like) just about every customer. If that's the reality down here in the trenches, a NHIN is most likely a very long way off.

In a related item, there are some screenshots from the future Google Health service (codenamed "Weaver") here.

Update: Dr. Bill Crounse at the HealthBlog also has some thoughts about the NYT article: Doctor Google and Doctor Microsoft; if not them, who?

Posted in EMR, Google, Interoperability, Microsoft, PHR | 4 Comments

Developing a real-time data flow and control model with WCF

A Windows Communication Foundation (WCF) service is defined through its operations and data contracts. One of the major benefits of WCF is the ease with which a client can create and use these services through the automatically generated proxy classes. The service side is only half of the communications link though. Discovering the correct WCF configuration options that allow a solution to operate properly was not as easy as I thought it would be.

This post describes a specific WCF-based data control and streaming architecture. The primary goal of this service is to provide a continuous data stream (as buffers of short values) from a real-time data acquisition source. The client would then be able to display the data as it became available or store the data when directed by the user. In addition, the service allows the client to both get status information (Getters) and control certain attributes (Setters) of the underlying data source. This is illustrated here:

Real-time architecture

The DataBufferEvent is defined as a one-way callback and continuously delivers data to the client. The IsOneWay property is valid for any operation that does not have a return value and improves network performance by not requiring a return message. The Getters and Setters [for you Java folks, this has nothing to do with JavaBeans] can be called at any time. Changing a data source attribute with a Setter will probably affect the data stream, but it is the responsibility of the data source to ensure data integrity. The underlying transport binding must support duplex operation (e.g. wsDualHttpBinding or netTcpBinding) in order for this scenario to work.

Here is what an example (a sine wave generator) service interface looks like:
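The original code listing didn't survive here, so this is only a sketch of what a duplex contract matching the description above might look like. Only the DataBufferEvent name comes from the post; the interface and method names are my assumptions.

```csharp
using System.ServiceModel;

// Hypothetical callback contract for the continuous data stream.
public interface ISineWaveCallback
{
    // IsOneWay: no reply message is generated, so streaming never blocks.
    [OperationContract(IsOneWay = true)]
    void DataBufferEvent(short[] buffer);
}

// Hypothetical service contract; SessionMode.Required pairs with the
// PerSession instancing discussed below, and CallbackContract enables
// the duplex channel back to the client.
[ServiceContract(SessionMode = SessionMode.Required,
                 CallbackContract = typeof(ISineWaveCallback))]
public interface ISineWaveGenerator
{
    [OperationContract]
    void StartData();                 // begin streaming buffers to the callback

    [OperationContract]
    void StopData();

    [OperationContract]
    double GetFrequency();            // a "Getter"

    [OperationContract]
    void SetFrequency(double hertz);  // a "Setter"
}
```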

The service class is implemented as follows:
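Again, the listing itself is gone; a minimal skeleton consistent with the description (class and member names assumed) might look like this. OperationContext.GetCallbackChannel is the standard WCF way to capture the per-session callback.

```csharp
using System.ServiceModel;

// Hypothetical implementation skeleton. PerSession gives each connected
// client its own service instance; the shared data source behind it is
// responsible for coordinating the multiple requesters.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerSession)]
public class SineWaveGenerator : ISineWaveGenerator
{
    private ISineWaveCallback callback;
    private double frequency = 1.0;

    public void StartData()
    {
        // Capture this session's callback channel and register it with
        // the (assumed) underlying data source.
        callback = OperationContext.Current
                                   .GetCallbackChannel<ISineWaveCallback>();
        // dataSource.Subscribe(buffer => callback.DataBufferEvent(buffer));
    }

    public void StopData()
    {
        // dataSource.Unsubscribe(...);
    }

    public double GetFrequency() { return frequency; }

    public void SetFrequency(double hertz)
    {
        // A Setter will likely affect the stream; the data source must
        // keep the data consistent across the change.
        frequency = hertz;
    }
}
```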

The InstanceContextMode.PerSession mode is appropriate for this type of interface. Even though there is probably only a single data source, you still want to allow multiple service session instances to provide data simultaneously to different clients. The data source would be responsible for managing the multiple data requesters.

With the service side complete, all the client needs to do is create the proxy classes (with either Visual Studio or Svcutil), set up the DataBufferEvent callback, and call the appropriate control functions. My first client was a WinForms application to display the data stream. The problem I ran into was that even though the data callbacks worked properly, the control functions would often hang the application when they were invoked.

It took quite a bit of searching around before I found the solution, which is here. You can read the details about the SynchronizationContext issues, but this caused me to spin my wheels for several days. The upside is that in trying to diagnose the problem I learned how to use the Service Trace Viewer Tool (SvcTraceViewer.exe) and the Configuration Editor Tool (SvcConfigEditor.exe, which is in the VS2005 Tools menu).

So after adding the appropriate CallbackBehavior attributes, here are the important parts of the client that allow this WCF model to operate reliably:
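Since the original client listing is missing, here is a sketch of the relevant pieces as I understand the fix: setting UseSynchronizationContext to false keeps WCF from posting callbacks through the UI thread's SynchronizationContext (the source of the hangs), and the callback then marshals data to the UI itself. The class and proxy names (SineWaveClient, SineWaveGeneratorClient, ISineWaveGeneratorCallback) are assumptions, not the actual project code.

```csharp
using System;
using System.ServiceModel;
using System.Windows.Forms;

// Hypothetical client-side callback implementation. Without
// UseSynchronizationContext = false, WCF dispatches callbacks on the UI
// thread, and a control-function call that is waiting for a reply can
// deadlock against an incoming DataBufferEvent.
[CallbackBehavior(UseSynchronizationContext = false,
                  ConcurrencyMode = ConcurrencyMode.Multiple)]
public class SineWaveClient : ISineWaveGeneratorCallback
{
    private readonly Form form;   // the WinForms display surface

    public SineWaveClient(Form form) { this.form = form; }

    public void DataBufferEvent(short[] buffer)
    {
        // Arrives on a worker thread; hand it to the UI thread
        // asynchronously so the callback returns immediately.
        form.BeginInvoke(new Action(() => { /* plot buffer */ }));
    }
}

// Usage sketch with the generated duplex proxy:
//   var ctx   = new InstanceContext(new SineWaveClient(mainForm));
//   var proxy = new SineWaveGeneratorClient(ctx);
//   proxy.StartData();
//   proxy.SetFrequency(10.0);   // control calls no longer hang the UI
```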

The first take-away here is that WCF is a complex beast. The second is that even though it's easy to create proxy classes from a WCF service, you have to understand and take into account both sides of the communications link. It seems so obvious now!

That's it. This WCF component is just part of a larger project that I'm planning on submitting as an article (with source code) on CodeProject, someday. If you'd like to get a copy of the source before then, just let me know and I'll send you what I currently have.

Update: Proof of my first take-away: Callbacks, ConcurrencyMode and Windows Clients. Thanks Michele! 🙂

Posted in .NET, Networking, WCF | 9 Comments

Microsoft Robots and Medicine

In this month's IEEE Spectrum magazine there's an interesting article about Microsoft's efforts in robotics called Robots, Incorporated by Steven Cherry.

The article describes the team that created Microsoft Robotics Studio, how the group came to be, some of the software technologies, and an overview of Microsoft's strategy in the robotics marketplace.

What prompted this post is an example of how robotics might be used for medical purposes:

Imagine a robot helping a recovering heart-attack patient get some exercise by walking her down a hospital corridor, carrying her intravenous medicine bag, monitoring her heartbeat and other vital signs, and supporting her weight if she weakens.

Also, in the discussion about multi-threaded task management:

Or there might arise two unrelated but equally critical tasks, such as walking beside a hospital patient and simultaneously regulating the flow of her intravenous medications.

It's clear that these are just illustrative examples and there's no attempt to delve into the complexities of how to achieve these types of tasks. What I think is enlightening is that it provides examples of what the expectations are for robotics in medicine.

There are many research efforts in this area, but there's not really a lot of commercialization yet. There are numerous efforts in robotic surgery, and robotic prosthetics (e.g. see iWalk) hold a lot of promise for improving lives. It's not exactly robotics, but the integration of an insulin pump with real-time continuous glucose monitoring for diabetes management (see the MiniMed device) can certainly be considered the application of "intelligent" technology.

I think that the expectations for the future use of robots for medical purposes are as realistic as any other potential use. There are some areas where the technological hurdles are very high, e.g. neural interfacing (see BrainGate), but many practical medical uses will have the same set of challenges as any other robotic application. Human safety will have to be a primary concern any time a robot interacts with people. Manufacturers of medical devices have the advantage that risk analysis and regulatory requirements are already part of their development process. Cost is certainly the other major challenge for the use of robots in both the consumer and medical markets. No matter how good the solution is, it must still be affordable.

Posted in Medical Devices, Microsoft, Robotics | Leave a comment

SolutionZipper Updated

I've updated my SolutionZipper source and installer to version 1.3 on CodeProject. Here are the changes:

  • Fixed a bug that was causing SZ to fail during the "Handle external projects" phase.
  • Ignore VC++ Intellisense Database files, i.e. *.ncb.
  • Ignore hidden files and folders.

I originally wrote this last year as simply a convenience function. Even though I use a source code control system (Subversion) at work, I still need a quick way to snapshot and backup my personal projects at home.

I recently started a solution that included a C++ project and noticed some problems. The first was that there was no need to back up the VC++ Intellisense database file. The second problem might be related to one of these:

  • Microsoft Visual Studio 2005 Professional Edition - ENU Service Pack 1 (KB926601)
  • Visual Studio 2005 extensions for .NET Framework 3.0 (WCF & WPF), November 2006 CTP
  • Microsoft ASP.NET 2.0 AJAX Extensions 1.0

I don't know which one caused the problem, but after one of these was installed, VS2005 had project list items that were not file-system based -- a project called <MiscFiles>? Anyway, this caused the search for external projects to fail.

There was a request to ignore Subversion (.svn) directories. This was a good idea so I just ignore all hidden directories and files. This also means that VS Solution User Option files (.suo) are not included in the zip file.
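A check like the following is one way the filtering might be done (an assumed sketch, not the actual SolutionZipper source): testing the Hidden attribute on each item covers .svn directories and .suo files alike, and pruning hidden directories skips their entire contents.

```csharp
using System.Collections.Generic;
using System.IO;

// Assumed sketch: skip anything the file system marks as hidden.
static bool IsHidden(FileSystemInfo item)
{
    return (item.Attributes & FileAttributes.Hidden) != 0;
}

// Recursively collect the files to zip, pruning hidden entries.
static IEnumerable<string> FilesToZip(DirectoryInfo dir)
{
    foreach (FileInfo file in dir.GetFiles())
        if (!IsHidden(file))                 // drops .suo and similar
            yield return file.FullName;

    foreach (DirectoryInfo sub in dir.GetDirectories())
        if (!IsHidden(sub))                  // prunes .svn and friends
            foreach (string path in FilesToZip(sub))
                yield return path;
}
```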

Posted in .NET, Visual Studio | 2 Comments

New medical devices and technologies

There's a lot going on and keeping track of the latest developments can be a challenge. Here are a few sites that I've found to be informative and up-to-date:

MedGadget.com
Medical Devices News
Medical Device Link
DoctorsGadgets.com

If you are aware of other good sources, please let me know.

With all of the Apple iPhone hype lately, the Medical Images on an iPhone post caught my eye. Privacy and HIPAA concerns don't worry me nearly as much as the thought of a Radiologist reading an x-ray while he's driving to work. Seriously though, improvements in both display resolution and user interface capabilities have come a long way. Apple is not the first to provide this type of functionality. PDA-based medical record applications and image viewers have been around for a long time.

Medical Images on iPhone

Posted in Medical Devices | 1 Comment

Update: Agile development in a FDA regulated setting

I contacted Frank Jacquette regarding my previous post on this subject (Agile development in a FDA regulated setting). His experience using Agile methodologies for pure software medical device projects does not correspond with my conclusion regarding cost effectiveness and regulatory risks. Frank said:

It has been a long while since I wrote that article, but we've applied the same approach to some fairly significant systems and they've all come in on time and within budget.

He does agree that the regulatory risk is a legitimate concern, but their experience with clients and regulators has always been positive.

I want to thank Frank for so graciously responding to my inquiry.

He also pointed me to a presentation called Integrating lightweight software processes into a regulated environment (warning: PDF) by Adrian Barnes that I had not seen before. This is a far more detailed look at possible solutions for bridging the gap between "Agile Processes" and "Formal Processes". The subject progression and graphics are very well done. It's worth a careful look-through. I'll let you be the judge, but I think Adrian's conclusions have the same level of skepticism as mine. I broadly addressed cost effectiveness whereas he specifically deals with risk factors for his bridge solution. He has even less faith on the regulatory risk side: "A brave man would try to convince the FDA that Agile is OK".

It's always good to have multiple points-of-view on a subject.

Posted in Agile, FDA | 3 Comments

First Look: Microsoft Health Common User Interface (CUI)

My initial impression is that the current implementation of the Microsoft Health CUI (v1.0.114.000) is strong on depth and weak on breadth. Because of the limited number of components available, this software is too early in its implementation to be of much use in a real product. For example, the first thing I would need is a 'PatientGrid' component, which doesn't exist yet. This is just the first CTP, so missing features are to be expected.

Here are the component lists for WinForms and Web applications:

CUI WinForm Components
CUI Web Components

The design guidance documents are the most impressive aspect of this project. Each control has its own complete document that includes sections like 'How to Use the Design Guidance' and 'How Not to Use the Design Guidance'. The higher level terminology and accessibility guidance documents are equally comprehensive. As a software developer who has had to work from ambiguous requirements specifications, nothing is left to the imagination here. The requirements for some components (e.g. AddressLabel) are written to UK specifications, but that's to be expected since CUI is being developed there.

The Visual Studio integration is good, and the source code for the individual components appears to be well constructed and documented.

The CUI Roadmap isn't very specific, but I like the design guidance driven approach. All of the up-front design work makes me think of my previous post on Agile development. The CUI Delivery Lifecycle is described as iterative, but I doubt it's actually being developed using one of the Agile methodologies. In any case, I'll continue to watch the progress of this project and look forward to future releases. It could be my excuse to actually use (instead of just playing with) WPF someday!

Posted in .NET, GUI, Microsoft | 1 Comment

Microsoft Health Common User Interface (CUI)

This looks like an interesting initiative. Links (originally found here):

Microsoft Health Common User Interface (CUI)

Controls and Library from CodePlex

From the CodePlex site:

The Toolkit controls developed for this release conform to the recommendations contained in the Design Guidance documents. The Toolkit is a set of .NET 2.0 controls that help Independent Software Vendors (ISVs) build safe, consistent user interfaces for healthcare applications. The Toolkit controls have been created for use in both ASP.NET AJAX applications and WinForms.NET. The web versions of the controls are based on the ASP.NET AJAX Toolkit.

I'll take some time to investigate the controls and library and see how well they integrate into an existing .NET 2.0 WinForms application.

First impression: The CodePlex download is 62MB!!

I'll let you know what I find.

Posted in .NET, Microsoft | 1 Comment

Agile development in a FDA regulated setting

I ran across an interesting Agile v. FDA discussion the other day. For those that are not familiar with what a FDA regulated product means, I'll give a brief overview.

In order to market and sell a medical device in this country -- OUS (outside US) medical device regulations are different -- you must have FDA approval. Most of the time, this involves a FDA Premarket Notification 510(k) submission. Your company must be registered with the FDA and is subject to periodic on-site visits by FDA inspectors to audit your quality system records. There are two important points here:

  1. Getting approval: In order to sell your device you not only have to prove safety and effectiveness, but you also have to demonstrate that the design and development of the device -- including the software -- follow Quality System Regulations. The Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices details these requirements.
  2. Keeping approval: After you receive 510(k) approval the FDA can pull your device off the market (the dreaded "recall") at any time due to complaints or unsatisfactory audit results.

What this means is that your on-going software development process must adhere to a well defined quality system process. As noted in the discussion, the FDA guidance does not dictate that a particular process must be used. The quality system process itself is designed by you -- actually, your entire company -- and simply needs to ensure that you are designing and building a quality product. The difficult part is that you have to be able to prove to the outside world that you have actually followed that process.

I've spent just about my entire career developing software for devices that were under FDA regulatory control. In the old days (pre ~1996) the FDA did not have a good concept of the software development process as part of its quality system regulations, and inspectors did not usually scrutinize software design and development. Nowadays, the FDA has a much clearer understanding of the software development process. It's that understanding that is one of the central issues with respect to adopting an Agile development process for software in medical devices.

Let's first look at the FDA Good Manufacturing Practice (GMP - Quality System Regulation) requirements. It's Subpart C--Design Control (§ 820.30) that's of primary interest. Here's the outline:

  1. General
  2. Design and development planning
  3. Design input (specifications)
  4. Design output (coding)
  5. Design review
  6. Design verification (Was the product built right?)
  7. Design validation (Was the right product built?)
  8. Design transfer
  9. Design changes
  10. Design history file

The critical issue for the software development process is that each of the items 3-7 require a formal review and approval step. This is the reason why most companies that develop medical device software have chosen to use a quality system process that follows the Waterfall model for their development.

Waterfall

This sequential approach is a natural fit for allowing you to review and document each step in the process.

Now let's look at the Agile process. One of the better descriptions of the Agile development process is 'The New Methodology' by Martin Fowler. There are a number of flavors of Agile (SCRUM, Extreme Programming (XP), etc.) that all try to encompass the Agile Manifesto. One advantage that the Agile process has over the Waterfall approach is its ability to adapt to the unpredictability of requirements and changing customer needs. This is handled through the use of Iterations. From the Martin Fowler paper:

The key to iterative development is to frequently produce working versions of the final system that have a subset of the required features. These working systems are short on functionality, but should otherwise be faithful to the demands of the final system. They should be fully integrated and as carefully tested as a final delivery.

The purpose of the iterative development is to facilitate requirements changes between each iteration. Agile Methodologies in a Validated Setting by Frank Jacquette proposes some steps to accomplish the use of iterative development in a FDA regulated environment.

Let's assume that an Agile development process would be able to produce higher quality medical device software and that because of the customer focus of this process the resultant product would better meet market needs. Even with these assumptions, I think there are two major issues that need to be addressed:

  1. I just don't see how a cost-effective quality system process implementation can be accomplished. Even if (and it's a big if) the actual overall software development time was shorter, the extra costs incurred by the additional process controls, documentation and testing required for each iteration would far exceed those savings.
  2. The last point Mr. Jacquette makes is the other issue:

    Take the time to explain agile methodologies to your regulatory specialists and work hard to gain their understanding and agreement, because the burden of proof will be on them and you if an FDA auditor decides to take a peek under the hood.

    This seems like a huge risk to me.

The first question becomes: Is the possibility of an improved product (features and quality) worth the additional cost? I suppose if you had a development team with extensive Agile experience you could make the argument that it would be worth it. If not, I think the ROI (return on investment) analysis would be a difficult one to make.

The second question is a big unknown, which is why I think the risk is high. My experience with FDA auditors is generally good. They are professionals that are focused on getting a very specific job done. Since their interest is the entire quality system, the typical audit is a whirlwind affair as it is. The amount of time spent on design control (auditing the Design History File) is usually minimal. Even if you had received 510(k) approval with an Agile design control process, having to take the time to explain to an on-site FDA auditor (who in all likelihood has never seen your 510(k)) a methodology they probably have never even heard of is reason to worry.

Conclusion:

It seems to me that Agile methodologies have a long way to go before we see them commonly used in medical device software development. I've searched around and have found nothing to make me think that there is even a trend in this direction. Maybe it's that Agile processes are just too new. They seem popular as a presentation topic (I've been to several), but I wonder how prevalent Agile is even in mainstream software development?

If you are (or have ever been) part of an Agile development team for a FDA regulated product I'd love to hear about your experiences and how you were able to resolve the types of issues presented here.

Thanks!

Posted in Agile, FDA | 18 Comments

Selecting an ORM for a .NET project: A real-world tale.

This is a story that I'm sure is being written over and over again by software designers like myself that need to develop relatively complex applications in a short amount of time. The only realistic way to accomplish this is to bite off major chunks of functionality by leveraging the work of others. Reinventing the wheel, especially wheels you know hardly anything about, can't be justified. Because of that many of the architectural decisions you have to make revolve around figuring out how to get all of those 'frameworks' and libraries and GUI components to work together. But I digress...

This post simply tells the tale of my journey through picking a framework for a project, how the landscape has changed over time, and what that means for the future. In case you don't want to read on, the ending is happy -- I think.

For any project that requires the storage and orderly retrieval of data there's pretty much no way to get around using a database. Also, if you have a requirement (like I did) that the user is able to select from a variety of SQL back-end databases then the use of some sort of Object-Relational Mapping (ORM) framework software is the only sane way to implement your application.

I'm not going to get into the details of ORM software or try to compare the features of the vast array of frameworks available. One current list of object-relational mapping software includes 34 entries for .NET (and another 27 for Java). One problem with making a selection is that many of these frameworks are constantly evolving and are thus moving targets.

On a side note, Microsoft has squarely thrown their hat into the ORM ring with the introduction of Language Integrated Query (LINQ) which is scheduled to be released with the 'Orcas' version of Visual Studio 2008 and .NET Framework 3.5 (see a summary here and a great series of articles on ScottGu's Blog). LINQ won't be released until next year so it couldn't even be considered for my project.

So, the question is: If you're developing a modest .NET application, how do you go about selecting an ORM framework? I'll walk you through the selection process I used.

General filtering criteria:

  1. Non-commercial: I had no money to spend. This cut the possibilities about in half.
  2. Open source: Even though I didn't want to touch the implementation, it was critical for long-term security that I had all the source code and build tools so it could be modified if needed.
  3. SQL back-end selection: The framework needed to support at least four of the most popular SQL databases. I wanted the user to be able to select local and server-based databases that were both free (MySQL and SQLite3) and commercially available (Microsoft Access and MS-SQL Server).
  4. Ease of use: The learning curve for some frameworks is steep. I wanted something that was easy to learn and to get up and running quickly.
  5. Documentation and example code: Having a well documented API is important, but it's the example code and tutorials that teach you how to use the framework.
  6. Support: This was the most subjective piece of the puzzle. The key is to get a sense of the level of developer and community involvement in the project. This comes from looking at the release frequency and history (in particular, look at the release notes and the number and type of issues resolved) and the level of Forum and/or Wiki activity. It's pretty easy to detect a dead project. Foreshadow: Just because it's an active project now doesn't mean it won't die in the near future!

To make a long story short, I ended up evaluating two open source ORMs: NHibernate for .NET and Gentle.Net. As noted above, I did this evaluation about a year ago (Summer 2006) and things have changed since then. In any case, at the time I felt that the two were comparable for the functionality I needed, which included just simple class to table mapping. The two major competing factors were ease of use (#4) and support (#6). NHibernate was clearly the more active project, but the Gentle.Net developers (in particular, Morten Mertner) were responsive to questions and quickly resolved issues that came up. I found Gentle.Net much easier to get up and running. In particular, in NHibernate you need to create a matching mapping file for every database class. This was daunting at the start, and I felt like the split-file definitions would create a long-term maintenance problem. With Gentle.Net, I was able to use MyGeneration to create the .NET class, which included the database field decorators, and that was it. Also, re #4: the NHibernate core is a 25MB download while Gentle.Net is less than 7MB. This is not only telling of the relative complexity of the frameworks, but it also means that there would be significantly more baggage that I would have to carry around with NHibernate -- and install on customer systems.
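To illustrate the "decorators" mentioned above: working from memory, so the attribute and method names are only an approximation of the actual Gentle.Net API, a mapped class looked roughly like this, with no per-class XML mapping file required.

```csharp
using Gentle.Framework;

// Rough sketch from memory; attribute names and signatures may not match
// the real Gentle.Net API exactly. The table and column names here are
// made up for illustration.
[TableName("Patient")]
public class Patient
{
    [TableColumn("Id", NotNull = true), PrimaryKey(AutoGenerated = true)]
    public int Id;

    [TableColumn("Name", NotNull = true)]
    public string Name;
}

// Retrieval went through the Broker facade, e.g. (approximate):
//   IList patients = Broker.RetrieveList(typeof(Patient));
```

Contrast this with NHibernate at the time, where each such class needed a separate hbm.xml mapping file kept in sync with the code.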

Based on that analysis it shouldn't be a surprise that I picked Gentle.Net. One year later I'm still happy with that decision. Gentle.Net continues to meet my needs and will for the foreseeable future.

OK, I'm happy, so where's the problem? The problem is that it appears development has stopped on Gentle.Net. I'm using the .NET 1.1 assemblies (1.2.9+SVN) with my .NET 2.0 application. The features in the next generation Gentle.Net (2.0) development look promising, but at this point it doesn't look like it will ever be released.

In retrospect, I suppose that even with my concerns, selecting NHibernate might have been the better long-term choice. I think it was primarily the ease of the initial Gentle.Net implementation that had the biggest sway on my decision. Does this mean I made a wrong decision? You may think so, but I don't. There may be some cognitive dissonance going on here, but I did what I consider a reasonable evaluation and analysis and selected the best solution for my needs under the circumstances.

So here we are today. At some point in the future I will probably need to update or add a new SQL database that Gentle.Net doesn't currently support. I'll have to do something. Here are my options:

  • Add the needed functionality to the existing Gentle.Net code base. This might not be unreasonable to do, but it's just putting off the inevitable.
  • Re-evaluate current ORM frameworks and select one like I did last year. Déjà vu all over again... Microsoft side note: I can also wait for .NET Framework 3.5 to be released and use LINQ to SQL. My concern with LINQ is support for non-Microsoft databases. There's already some effort to develop LINQ providers (MySQL, etc. see here) but how long will it take to get reliable support for something like SQLite3 (or 4 by then) if it happens at all?
  • Justify the expense and purchase a supported product.

I think my little tale is pretty typical. Any developer that's using open source components is, or will be someday, in the exact same situation as I am. Doing good software design to mitigate these types of issues is why they pay us the big bucks! 🙂

UPDATE (6/23/08): Here's a good take on selecting an ORM: Objectively evaluating O/R Mappers (or how to make it easy to dump NHibernate).

Posted in .NET, LINQ, ORM | 10 Comments