Archive for August, 2007

2007 Ultimate Developer And Power Users Tool List For Windows

Tuesday, August 28th, 2007

If you do any Windows development work and haven't seen it yet, this list is a must-have:

Scott Hanselman's 2007 Ultimate Developer And Power Users Tool List For Windows

Software Defects and the FDA

Saturday, August 25th, 2007

I ran across this post: Why Making Software Companies Liable Will Not Improve Security. It's a rather long piece that discusses the liability of software makers for security breaches. In the middle of the article, the author talks about his experience working on FDA-regulated medical device software. I think his depiction is a little harsh, but probably not that far off, depending on the environment you're working in. The conclusion from his FDA experience is:

In short, I believe that any attempt to impose quality on software by inspection and rule books is doomed to failure.

I would say that there is no single set of rules that can ensure software quality (that's not quite 'doomed to failure', but it may be close). I think the FDA "rule book" (as I briefly describe here) is a full product life cycle quality system that generally meets its intended purpose. It doesn't ensure that all medical device software is free of defects. Far from it. The regulations simply provide the FDA with a means of determining what the product requirements and design were and how well the product was actually tested. It's up to the regulators to use that information to decide whether a product meets their quality standards.

On a side note, the post mentioned that in 1997, 50% of FDA device recalls were due to design defects. The article Failure Modes in Medical Device Software analyzes software-only FDA recalls from 1983-1997 and is a good read on the breakdown of software defects. According to that article, only about 10% of all FDA recalls (1994-1996) were software related. It would be interesting to know how that number has changed since.

Kernel Object Namespace and Vista

Monday, August 20th, 2007

Just a quick development note:

According to Kernel Object Namespaces, objects can have one of three predefined prefixes -- 'Local\', 'Global\', or 'Session\'. For Win2K/XP I've always used the 'Local\' prefix, which works fine. My primary use is with a Mutex to determine that a single instance of an application is running (like here). I also use the Mutex from a system service to discover whether a GUI application is available for messaging. When trying to run the same code on Vista, I found that the 'Local\' namespace does not work when Mutex.OpenExisting() is called from the system service, which runs under a different user (from the same user it works fine). So it appears that in Vista the 'Local\' prefix behaves differently with respect to the client session namespace than it does in Win2K/XP.

I searched around for a solution, but was unable to find a definitive answer. I did find a post about the Private Object Namespace, which alludes to Vista kernel changes, but that's all. Here's what I determined empirically:

  Namespace     Win2K    XP     Vista
  'Local\'      YES      YES    NO
  'Session\'    NO       NO     YES
  'Global\'     YES      YES    YES

The NO entries in the table mean that the namespace did not work. So it appears that in order to support all three Windows versions I'd have to use the 'Global\' namespace. This is not a good solution, since 'Global\' objects are shared across every session; with fast user switching or Terminal Services, a second logged-on user couldn't run their own instance. Unless I find another way, I'll have to determine the OS version and select the appropriate namespace at runtime ('Session\' for Vista, 'Local\' for Win2K/XP).
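A minimal sketch of what that runtime selection might look like (the AppInstance helper and the mutex name are hypothetical; Vista is detected as NT version 6.0):

    using System;
    using System.Threading;

    static class AppInstance
    {
        // 'Session\' works on Vista, 'Local\' on Win2K/XP (see the table above).
        // Vista is NT 6.0, so keying off the major version is sufficient here.
        static string Prefix
        {
            get { return Environment.OSVersion.Version.Major >= 6 ? @"Session\" : @"Local\"; }
        }

        // GUI application: create the mutex so a second instance (or the
        // system service) can find it.
        public static Mutex Create(out bool createdNew)
        {
            return new Mutex(false, Prefix + "MyAppRunning", out createdNew);
        }

        // System service: probe for a running GUI instance.
        public static bool GuiIsRunning()
        {
            try
            {
                using (Mutex m = Mutex.OpenExisting(Prefix + "MyAppRunning"))
                    return true;
            }
            catch (WaitHandleCannotBeOpenedException)
            {
                return false;
            }
        }
    }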

Google, Microsoft, and Health

Wednesday, August 15th, 2007

I think the recent New York Times article entitled Google and Microsoft Look to Change Health Care missed the bigger picture. The article talks about other Internet companies (like WebMD), but it does not make any mention of the Federal Government's involvement in this arena.

In particular, there's the Nationwide Health Information Network (NHIN), which was initiated by an executive order in April 2004:

The Nationwide Health Information Network (NHIN) is the critical portion of the health IT agenda intended to provide a secure, nationwide, interoperable health information infrastructure that will connect providers, consumers, and others involved in supporting health and healthcare. The NHIN will enable health information to follow the consumer, be available for clinical decision making, and support appropriate use of healthcare information beyond direct patient care so as to improve health.

At the end of May NHIN published four prototype architectures. The proposals are standards-based, use decentralized databases and services ('network of networks'), and try to incorporate existing healthcare information systems. The companies involved were Accenture, CSC/Connecting for Health, IBM, and Northrop Grumman.

It seems to me that Google and Microsoft are using their proprietary technologies to try to achieve the same goals as NHIN. One of the major differences of course is transparency. Everything that NHIN does is open to public scrutiny whereas GOOG/MSFT have their own market research programs and keep their strategies (for making money) close to the vest.

Besides ensuring privacy, I would argue that one of the key components for creating a successful NHIN is interoperability. Even with "standards" like HL7 and DICOM available, IMHO the current state of the Electronic Health/Medical Records industry is total chaos. Just like GOOG/MSFT are creating their own islands of knowledge, there are a lot of other vendors (84 listed on Yahoo! Directory) doing the same. As medical device developers trying to interface with customer EMR systems, we're faced with having to provide essentially unique solutions to (what seems like) just about every customer. If that's the reality down here in the trenches, an NHIN is most likely a very long way off.

In a related item, there are some screenshots from the future Google Health service (codenamed "Weaver") here.

Update: Dr. Bill Crounse at the HealthBlog also has some thoughts about the NYT article: Doctor Google and Doctor Microsoft; if not them, who?

Developing a real-time data flow and control model with WCF

Saturday, August 11th, 2007

A Windows Communication Foundation (WCF) service is defined through its operations and data contracts. One of the major benefits of WCF is the ease with which a client can create and use these services through the automatically generated proxy classes. The service side is only half of the communications link though. Discovering the correct WCF configuration options that allow a solution to operate properly was not as easy as I thought it would be.

This post describes a specific WCF-based data control and streaming architecture. The primary goal of this service is to provide a continuous data stream (as buffers of short values) from a real-time data acquisition source. The client would then be able to display the data as it became available or store the data when directed by the user. In addition, the service allows the client to both get status information (Getters) and control certain attributes (Setters) of the underlying data source. This is illustrated here:

[Figure: Real-time architecture]

The DataBufferEvent is defined as a one-way callback and continuously delivers data to the client. The IsOneWay property is valid for any operation that does not have a return value and improves network performance by not requiring a return message. The Getters and Setters [for you Java folks, this has nothing to do with JavaBeans] can be called at any time. Changing a data source attribute with a Setter will probably affect the data stream, but it is the responsibility of the data source to ensure data integrity. The underlying transport binding must support duplex operation (e.g. wsDualHttpBinding or netTcpBinding) in order for this scenario to work.

Here is what an example (a sine wave generator) service interface looks like:
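Something along these lines; the contract and operation names (ISineWaveService, ISineWaveCallback, SetFrequency, and so on) are placeholders of mine, with only DataBufferEvent taken from the description above:

    using System.ServiceModel;

    // Callback contract implemented by the client. DataBufferEvent is the
    // one-way operation that continuously delivers data buffers.
    public interface ISineWaveCallback
    {
        [OperationContract(IsOneWay = true)]
        void DataBufferEvent(short[] buffer);
    }

    [ServiceContract(CallbackContract = typeof(ISineWaveCallback),
                     SessionMode = SessionMode.Required)]
    public interface ISineWaveService
    {
        // Setters: control attributes of the underlying data source.
        [OperationContract]
        void SetFrequency(double hertz);

        [OperationContract]
        void Start();

        [OperationContract]
        void Stop();

        // Getters: status information from the data source.
        [OperationContract]
        double GetFrequency();
    }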

The service class is implemented as follows:
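Again as a sketch under the same assumed names, the key pieces being the PerSession instancing and fetching the callback channel from the OperationContext:

    using System.ServiceModel;

    [ServiceBehavior(InstanceContextMode = InstanceContextMode.PerSession)]
    public class SineWaveService : ISineWaveService
    {
        ISineWaveCallback callback;
        double frequency = 1.0;

        public void Start()
        {
            // Capture this session's callback channel; the data source
            // invokes callback.DataBufferEvent(buffer) for each new buffer.
            callback = OperationContext.Current.GetCallbackChannel<ISineWaveCallback>();
            // ... register this session with the underlying data source ...
        }

        public void Stop()
        {
            // ... unregister this session from the data source ...
        }

        public void SetFrequency(double hertz) { frequency = hertz; }
        public double GetFrequency() { return frequency; }
    }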

The InstanceContextMode.PerSession mode is appropriate for this type of interface. Even though there is probably only a single data source, you still want to allow multiple service session instances to provide data simultaneously to different clients. The data source would be responsible for managing the multiple data requesters.

With the service side complete, all the client needs to do is create the proxy classes (with either Visual Studio or Svcutil), set up the DataBufferEvent callback, and call the appropriate control functions. My first client was a Winform application to display the data stream. The problem I ran into is that even though the data callbacks worked properly, I would often see the control functions hang the application when they were invoked.

It took quite a bit of searching around before I found the solution, which is here. You can read the details about the SynchronizationContext issues, but this caused me to spin my wheels for several days. The upside is that in trying to diagnose the problem I learned how to use the Service Trace Viewer Tool (SvcTraceViewer.exe) and the Configuration Editor Tool (SvcConfigEditor.exe, which is in the VS2005 Tools menu).

So after adding the appropriate CallbackBehavior attributes, here are the important parts of the client that allow this WCF model to operate reliably:
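A sketch of the client side (same assumed names as above; the real client uses the generated proxy classes, but I'm using DuplexChannelFactory directly here just to keep the example self-contained):

    using System;
    using System.ServiceModel;

    // UseSynchronizationContext = false stops WCF from marshaling callbacks
    // onto the Winform UI thread's synchronization context, which is what
    // was causing the control functions to hang. Reentrant concurrency lets
    // a callback arrive while an outbound call is in progress.
    [CallbackBehavior(ConcurrencyMode = ConcurrencyMode.Reentrant,
                      UseSynchronizationContext = false)]
    class SineWaveCallback : ISineWaveCallback
    {
        public void DataBufferEvent(short[] buffer)
        {
            // Runs on a worker thread; a Winform client should hand the
            // buffer to the UI with Control.BeginInvoke.
        }
    }

    class Client
    {
        static void Main()
        {
            InstanceContext context = new InstanceContext(new SineWaveCallback());
            DuplexChannelFactory<ISineWaveService> factory =
                new DuplexChannelFactory<ISineWaveService>(
                    context,
                    new NetTcpBinding(),  // a duplex-capable binding
                    new EndpointAddress("net.tcp://localhost:8000/SineWave"));

            ISineWaveService proxy = factory.CreateChannel();
            proxy.Start();              // control call no longer deadlocks against the callback
            Console.ReadLine();
            proxy.Stop();
            ((IClientChannel)proxy).Close();
        }
    }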

The first take-away here is that WCF is a complex beast. The second is that even though it's easy to create proxy classes from a WCF service, you have to understand and take into account both sides of the communications link. It seems so obvious now!

That's it. This WCF component is just part of a larger project that I'm planning on submitting as an article (with source code) on CodeProject, someday. If you'd like to get a copy of the source before that, just let me know and I'll send you what I currently have.

Update: Proof of my first take-away: Callbacks, ConcurrencyMode and Windows Clients. Thanks Michele! 🙂

Microsoft Robots and Medicine

Sunday, August 5th, 2007

In this month's IEEE Spectrum magazine there's an interesting article about Microsoft's efforts in robotics called Robots, Incorporated, by Steven Cherry.

The article describes the team that created Microsoft Robotics Studio, how the group came to be, some of the software technologies, and Microsoft's strategy in the robotics marketplace.

What prompted this post is an example of how robotics might be used for medical purposes:

Imagine a robot helping a recovering heart-attack patient get some exercise by walking her down a hospital corridor, carrying her intravenous medicine bag, monitoring her heartbeat and other vital signs, and supporting her weight if she weakens.

Also, in the discussion about multi-threaded task management:

Or there might arise two unrelated but equally critical tasks, such as walking beside a hospital patient and simultaneously regulating the flow of her intravenous medications.

It's clear that these are just illustrative examples, with no attempt to delve into the complexities of actually achieving such tasks. What I find enlightening is that they show what the expectations are for robotics in medicine.

There are many research efforts in this area, but there's not really a lot of commercialization yet. There are numerous efforts in Robotic Surgery, and robotic prosthetics (e.g. see iWalk) hold a lot of promise for improving lives. It's not exactly robotics, but the integration of an insulin pump with real-time continuous glucose monitoring for diabetes management (see the MiniMed device) can certainly be considered an application of "intelligent" technology.

I think that the expectations for the future use of robots for medical purposes are as realistic as any other potential use. There are some areas where the technological hurdles are very high, e.g. neural interfacing (see BrainGate), but many practical medical uses will have the same set of challenges as any other robotic application. Human safety will have to become a primary issue anytime a robot is interacting with people. Manufacturers of medical devices have the advantage that risk analysis and regulatory requirements are already part of their development process. Cost is certainly the other major challenge for the use of robots in both the consumer and medical markets. No matter how good the solution is, it must still be affordable.

SolutionZipper Updated

Friday, August 3rd, 2007

I've updated my SolutionZipper source and installer to version 1.3 on CodeProject. Here are the changes:

  • Fixed a bug that was causing SZ to fail during the "Handle external projects" phase.
  • Ignore VC++ Intellisense Database files, i.e. *.ncb.
  • Ignore hidden files and folders.

I originally wrote this last year simply as a convenience. Even though I use a source code control system (Subversion) at work, I still need a quick way to snapshot and back up my personal projects at home.

I recently started a solution that included a C++ project and noticed some problems. The first was that there was no need to back up the VC++ Intellisense database file. The second problem might be related to one of these:

  • Microsoft Visual Studio 2005 Professional Edition - ENU Service Pack 1 (KB926601)
  • Visual Studio 2005 extensions for .NET Framework 3.0 (WCF & WPF), November 2006 CTP
  • Microsoft ASP.NET 2.0 AJAX Extensions 1.0

I don't know which one caused the problem, but after one of these was installed, VS2005 had project list items that were not file-system based (a project called <MiscFiles>?). Anyway, this caused the search for external projects to fail.
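My guess is the fix amounts to checking the project kind before asking for a file path; roughly, against the EnvDTE automation model (the actual guard in SolutionZipper may differ):

    using EnvDTE;

    static class ProjectFilter
    {
        public static bool IsFileSystemProject(Project project)
        {
            // Virtual nodes like <MiscFiles> report one of these kinds and
            // have no meaningful location on disk, so skip them.
            if (project.Kind == Constants.vsProjectKindMisc ||
                project.Kind == Constants.vsProjectKindSolutionItems ||
                project.Kind == Constants.vsProjectKindUnmodeled)
                return false;

            return !string.IsNullOrEmpty(project.FullName);
        }
    }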

There was a request to ignore Subversion (.svn) directories. This was a good idea, so I just ignore all hidden directories and files. This also means that VS Solution User Option files (.suo) are not included in the zip file.
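The hidden-file test itself is just a FileAttributes check; a minimal sketch of the kind of directory walk involved (the names here are mine, not the actual SolutionZipper code):

    using System;
    using System.Collections.Generic;
    using System.IO;

    static class ZipFilter
    {
        public static IEnumerable<string> FilesToZip(DirectoryInfo dir)
        {
            foreach (FileInfo file in dir.GetFiles())
            {
                bool hidden = (file.Attributes & FileAttributes.Hidden) != 0;  // skips .suo, etc.
                bool intellisenseDb = file.Name.EndsWith(".ncb", StringComparison.OrdinalIgnoreCase);
                if (!hidden && !intellisenseDb)
                    yield return file.FullName;
            }

            foreach (DirectoryInfo sub in dir.GetDirectories())
                if ((sub.Attributes & FileAttributes.Hidden) == 0)             // skips .svn
                    foreach (string path in FilesToZip(sub))
                        yield return path;
        }
    }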