Archive for July, 2008

Top 10 Concepts That Every Software Engineer Should Know

Tuesday, July 29th, 2008

Check out Top 10 Concepts That Every Software Engineer Should Know. The key point here is concepts. These are (arguably) part of the foundation that all good software engineers should have:

  1. Interfaces
  2. Conventions and Templates
  3. Layering
  4. Algorithmic Complexity
  5. Hashing
  6. Caching
  7. Concurrency
  8. Cloud Computing
  9. Security
  10. Relational Databases

From a practical point of view, this still comes down to a Selecting Books About Programming issue. This list is just more focused on specific software technologies and techniques.

So many books, so little time...

UPDATE (7/30/08):

Here's a career-related post with some good advice: Becoming a Better Developer. Learn a New Technology Each Month (#5) seems like a little much. I guess it depends on what your definition of "learn" is.

Digital Devices and EHRs: the ROI

Monday, July 28th, 2008

Here's a piece that provides a quick analysis of the ROI of having vital signs and ECG devices connected to an EMR:

Digital Devices and EHRs -- Perfect Together

The right way to do it:

  1. Electronic transfer of patient information
  2. Electronic transfer of BP and heart rate
  3. Electronic recording of test
  4. MD views results from anywhere

DONE -- half the steps, half the time

Ah, if it were only that easy.

Upgrade to WordPress 2.6

Saturday, July 26th, 2008

It's been over a year since my original WordPress installation.  I started with 2.4, and ignored the 2.5 release.  Since 2.6 was recently released I thought it was time to take the leap.

I followed the upgrade instructions closely. Here's a summary of the experience:

  1. The backup and upgrade process was straightforward. The instructions on which files to delete (and not delete) could have been clearer.
  2. Huge gotcha: lost category and link descriptions. The process for restoring these (WordPress 2.6 Upgrade - Fix Missing Categories) was awful, but at least it worked. If I had known about this beforehand I would have waited to do the upgrade until the problem was fixed.
  3. All of the plugins I use were upgraded and re-installed without a problem. I like the new integrated upgrade capability.
  4. I don't use a custom theme, so I did not have to deal with any display issues. The improved code formatting from the new wp-syntax plugin looks great.
  5. The only customization I had to do was to add my LinkedIn link to SideBar.php and the Google Analytics script to Footer.php.
  6. For some reason the sub-domain link to the blog (http://blog.bobonmedicaldevicesoftware.com) goes directly to the main domain http://bobonmedicaldevicesoftware.com. I'm sure that my root .htaccess file didn't change, so I don't understand why this is now happening. I haven't been able to find a solution yet, so in the meantime I've just redirected the main page back to the blog.

That's it. Hopefully the new 2.6 features will be worth the effort.

UPDATE (8/15/08):

Upgraded to WP 2.6.1 using WordPress Automatic Upgrade 1.2.1.  WPAU automates the WP and database backups, puts the site into maintenance mode, disables all plugins, uploads and unpacks the new WP version, upgrades the database, and re-enables the plugins.  This update wasn't really necessary for me, but I wanted to walk through the automatic upgrade process just to see how it went.  Everything worked fine, which was expected for a minor release.

I don't know if problem #2 was resolved for 2.6.1 upgrades from older WP versions.  I did a quick read-through of the fixed bugs, but that one didn't jump out at me.

UPDATE (12/13/08):

Upgraded from WordPress 2.6.5 to 2.7 with WPAU. Worked great. The new 2.7 admin interface is nice and the built-in updates will hopefully work as well as WPAU.

I also just noticed that my problem #6 -- sub-domain link to the blog -- has been fixed in WordPress 2.7. WooHoo!

Loading Individual Designer Default Values into Visual Studio .NET Settings

Wednesday, July 23rd, 2008

The VS.NET Settings designer creates a Settings class derived from ApplicationSettingsBase in Settings.Designer.cs (and optionally Settings.cs).  The default values from the designer are saved in app.config and are loaded into the Settings.Default singleton at runtime.
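
For example, a single user setting's designer default ends up in app.config looking something like this (the names here are made up for illustration):

    <userSettings>
      <MyApp.Properties.Settings>
        <setting name="ServerName" serializeAs="String">
          <value>localhost</value>
        </setting>
      </MyApp.Properties.Settings>
    </userSettings>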

So, now you have a button on a properties page that says 'Reset to Factory Defaults', and you want to reload the designer default values back into your properties.  If you want to do this for all property values you can just use Settings.Default.Reset(). But what if you only want to reset a subset of your properties?

There may be a better way to do this, but I couldn't find one.  The following code does the job and will hopefully save someone from having to reinvent this wheel.

The ResetToFactoryDefaults method takes a collection of SettingsProperty objects and uses the DefaultValue string to reset each value. Most value types (string, int, bool, etc.) worked with the TypeConverter, but the StringCollection class is not supported, so its XML string has to be deserialized manually.
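
A minimal sketch of the approach, assuming the designer-generated Settings class described above (the names are illustrative and error handling is omitted):

    using System.Collections.Specialized;
    using System.ComponentModel;
    using System.Configuration;
    using System.IO;
    using System.Xml.Serialization;

    public static class SettingsHelper
    {
        // Resets each property in the collection back to its designer default.
        public static void ResetToFactoryDefaults(
            ApplicationSettingsBase settings, SettingsPropertyCollection properties)
        {
            foreach (SettingsProperty property in properties)
            {
                string defaultValue = property.DefaultValue as string;
                if (defaultValue == null)
                    continue;

                if (property.PropertyType == typeof(StringCollection))
                {
                    // TypeConverter does not support StringCollection, so the
                    // designer default (an XML string) is deserialized manually.
                    XmlSerializer serializer = new XmlSerializer(typeof(string[]));
                    using (StringReader reader = new StringReader(defaultValue))
                    {
                        string[] items = (string[])serializer.Deserialize(reader);
                        StringCollection collection = new StringCollection();
                        collection.AddRange(items);
                        settings[property.Name] = collection;
                    }
                }
                else
                {
                    // Simple types (string, int, bool, etc.) convert cleanly
                    // from the designer's default value string.
                    TypeConverter converter =
                        TypeDescriptor.GetConverter(property.PropertyType);
                    settings[property.Name] =
                        converter.ConvertFromInvariantString(defaultValue);
                }
            }
            settings.Save();
        }
    }

Passing the ApplicationSettingsBase instance in keeps the method reusable across settings classes.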

These helper methods show how just selected (or all) properties can be reset.
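
Continuing the sketch, the helpers might look like this (the setting names are hypothetical):

    // Resets only a chosen subset ("WindowSize" and "RecentFiles" are
    // hypothetical setting names; Settings is the designer-generated class).
    public static void ResetSelectedDefaults()
    {
        SettingsPropertyCollection subset = new SettingsPropertyCollection();
        subset.Add(Properties.Settings.Default.Properties["WindowSize"]);
        subset.Add(Properties.Settings.Default.Properties["RecentFiles"]);
        ResetToFactoryDefaults(Properties.Settings.Default, subset);
    }

    // Resets everything -- equivalent to Settings.Default.Reset().
    public static void ResetAllDefaults()
    {
        ResetToFactoryDefaults(Properties.Settings.Default,
                               Properties.Settings.Default.Properties);
    }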

This code was developed with VS 2005, but should also work in VS 2008.

Interoperability: Google Protocol Buffers vs. XML

Monday, July 14th, 2008

Google recently open sourced Protocol Buffers: Google's Data Interchange Format (documentation, code download). What are Protocol Buffers?

Protocol buffers are a flexible, efficient, automated mechanism for serializing structured data – think XML, but smaller, faster, and simpler.
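
For a flavor of the format, a minimal .proto message definition looks something like this (the fields are illustrative):

    // A trivial .proto message definition (field names are illustrative).
    message Person {
      required string name  = 1;
      required int32  id    = 2;
      optional string email = 3;
    }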

The documentation is complete and worth a quick read-through. A complete analysis of PB vs. XML can be found here: So You Say You Want to Kill XML....

As discussed there, one of the biggest drawbacks for us .NET developers is that there is no support for the .NET platform. That aside, all of the issues examined are at the crux of why interoperability is so difficult. Here are some key points from the Neward post:

  1. The advantage to the XML approach, of course, is that it provides a degree of flexibility; the advantage of the Protocol Buffer approach is that the code to produce and consume the elements can be much simpler, and therefore, faster.
  2. The Protocol Buffer scheme assumes working with a stream-based (which usually means file-based) storage style for when Protocol Buffers are used as a storage mechanism. ... This gets us into the long and involved discussion around object databases.
  3. Anything that relies on a shared definition file that is used for code-generation purposes, what I often call The Myth of the One True Schema. Assuming a developer creates a working .proto/.idl/.wsdl definition, and two companies agree on it, what happens when one side wants to evolve or change that definition? Who gets to decide the evolutionary progress of that file?

Anyone who has considered using a "standard" has had to grapple with these types of issues. All standards gain their generality by trading something off (speed, size, etc.). This is why most developers choose to build proprietary systems that meet their specific internal needs. For internal purposes, there's generally no need to compromise. PB is a good example of this.

This also seems to be true in the medical device industry.  Within our product architectures we build components to best meet our customer requirements without regard for the outside world. Interfacing with others (interoperability) is generally a completely separate task, if not a product unto itself.

Interoperability is about creating standards, which means having to compromise and make trade-offs.  It would be nice if healthcare interoperability could be just a technical discussion like the PB vs. XML debate. This would allow better integration of standards directly into products, so that there would be less of the current split-personality (internal vs. external needs) development mentality.

Another thing I noticed about the PB announcement is how quickly it was held up against XML as a competing standard. With Google's clout, simply giving it away creates a de facto standard. Within the medical connectivity world, though, there is no Google.

I've talked about this before, but I'm going to say it again anyway. From my medical device perspective, with so many confusing standards and competing implementations, the decision on what to use ends up not being based on technical issues at all. It's all about picking the right N partners for your market of interest, which translates into N (or more) interface implementations. This isn't just wasteful, it's also wrong. Unfortunately, I don't see a solution to this situation coming in the near future.

More Software Forensics and Why Analogies Suck

Tuesday, July 1st, 2008

There's a recent article in the Baltimore Sun called Flaws in medical coding can kill that just rehashes static software analysis (hat tip: FDA Trying to Crack Down on Software Errors).

I've discussed software forensics tools before. Yes, bad software has hurt and killed people, and there's no excuse for it.  I still don't think an expensive automated software tool is the silver bullet (as the article implies) for solving these problems.

But here's what really bugs me:

"If architects worked this way, they'd only be able to find flaws by building a building and then watching it fall down"

This is a prime example of why analogies suck.  The quote is supposed to somehow bolster the FDA's adoption of "new forensic technology". If you stop and think about it, it does just the opposite.

I guess you first have to consider the source -- a VP of Engineering for a forensic software vendor. This is exactly what you'd expect to hear in a sales pitch.

What's truly ironic, though, is that a static analysis tool can only be used on source code! Think about it. Source code is the finished product of the software design and development process. Also, forensic science, by definition, is the examination of something that has already happened. It can only be done after the fact.

The logical conclusion you would draw from the analogy is that static analysis is probably useless because the building is already up!  If you step back and look at the full software quality process, this may well be true.

I'm not saying that static analysis tools don't have value. Like all of the other software tools we use, they have their place.

Just beware when you try to use an analogy to make a point.

UPDATE (7/5/08):

Here's another take on medical device bugs: When bugs really do matter: 22 years after the Therac 25.

UPDATE (7/16/08):

From Be Prepared: Software Forensics Gaining Steam at FDA, David Vogel of Intertech Engineering Associates says:

... that static tools are hyped to do more than they can actually deliver. “Static analysis looks for simple coding errors and does not apply heuristics to understand how it will perform dynamically because it is a static analysis tool”

I agree.
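
To make the distinction concrete, here's a contrived C# fragment (mine, not from the article) showing the kind of simple coding error a static analyzer can flag without ever running the program:

    using System;

    public static class Example
    {
        // A null check followed by an unconditional dereference -- the classic
        // defect pattern a static analyzer can flag without running the code.
        public static int CountItems(int[] items)
        {
            if (items == null)
                Console.WriteLine("no items supplied");
            return items.Length;  // possible null dereference on the path above
        }
    }

A timing- or load-dependent failure, by contrast, leaves no static signature like this and only shows up when the code actually runs.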

UPDATE (7/26/08):

Another reference: Are hospitals really safe?

UPDATE (9/16/08):

A couple more related articles:

Applying Static Analysis To Medical Device Software

Using static analysis to diagnose & prevent failures in safety-critical device designs

UPDATE (9/27/08):

Architecting Buildings and Software: Software architects are an important part of creating quality software and need to continue to refine and improve their role in the development process.  No matter how you try to bend and twist it, though, the building analogy will always be problematic -- so why bother? Maybe that "intuitive understanding" of the construction industry just distracts us from being innovative about what's required to build software.

UPDATE (12/1/08): If Jeff weren't a programmer he'd be a farmer: Tending Your Software Garden