Archive for the ‘Programming’ Category

The Desperate Need for Simplicity

Tuesday, October 13th, 2009

Ted Neward's article "Agile is treating the symptoms, not the disease" touches on several important points about the software industry.

  • Modern software development tools and technologies require a significant learning curve.
  • Development methodologies (like Agile) exist for managing complexity, but they do not reduce the burden these technologies impose.
  • In the last decade there has been no "Next Big Thing", like Access was in the 90s.

What's most interesting to me is:

We are in desperate need of simplicity in this industry. Whoever gets that, and gets it right, defines the "Next Big Thing".

What's true in the broader software world is also generally true in Healthcare IT.  In HIT there has never been an Access equivalent, just a lot of pieces and parts trying unsuccessfully to work together.

The need was touched on in Liberate the Data!.  Simplicity is desperately needed in order to create the "First Big Thing" for HIT interoperability.

UPDATE (10/14/09):  More commentary:

A .NET Application that Never Dies

Saturday, September 12th, 2009

Jeremy's Graceful Shutdown Braindump should really include another use case. How do you create a .NET application that never shuts down? Ever!

This is a common scenario for closed systems that only allow the user to interact with a predefined set of applications. In other words, the user is never able to access operating system functionality directly. In particular, they cannot install new applications or update any software components.

This situation is related to the issues discussed in Medical Device Software on Shared Computers. Creating a closed Windows-based system is not an easy task. For our XP Embedded system here are some of the considerations:

  1. Prevent booting from a peripheral device (CD-ROM, USB stick, etc.)
  2. Prevent access to the BIOS so that #1 is enforced.
  3. Prevent plug-n-play devices from auto-starting installers.
  4. Do not run Explorer as the start-up shell -- no desktop or start menu.
  5. Prevent Ctrl-Alt-Del from activating task manager options.
  6. Disable the Alt-Tab selection window so the user can not switch application focus.
  7. Ensure that the primary user interface application is always running.
  8. All UI components must exit without user interaction when the system is powered down.

One of the challenges for .NET applications is how to handle unexpected exceptions. What you need first is a way to catch all exceptions. OK, so now you know your program is in serious distress. You may be able to recover some work (a la a "graceful shutdown"), but after that it's not a good idea to keep the application running.
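
For a WinForms application, this means hooking both the UI-thread and AppDomain-level events. Here's a minimal sketch (the handler names and MainForm are my own placeholders):

    using System;
    using System.Threading;
    using System.Windows.Forms;

    static class Program
    {
        [STAThread]
        static void Main()
        {
            // Exceptions thrown on the UI (message pump) thread.
            Application.ThreadException += OnUiThreadException;

            // Exceptions thrown on any other thread in the AppDomain.
            AppDomain.CurrentDomain.UnhandledException += OnUnhandledException;

            Application.Run(new MainForm());
        }

        static void OnUiThreadException(object sender, ThreadExceptionEventArgs e)
        {
            // Log e.Exception, salvage what you can, then restart (see below).
        }

        static void OnUnhandledException(object sender, UnhandledExceptionEventArgs e)
        {
            // Log e.ExceptionObject; the CLR is likely tearing the process down after this.
        }
    }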

That means you have to restart the program. For a WinForm application one option is the framework's built-in restart, called from the catch-all handler (a one-line sketch):
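
    Application.Restart();  // shuts down the current instance and relaunches the executable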

Application.Restart() essentially calls Application.Exit(), which tries to gracefully shut down all UI threads. The problem is that the application may appear to be hung if you have background worker threads that are monitoring hardware devices that are not currently responding.

Another issue arises when the .NET application is doing interop with COM components. I've seen situations where all of the managed threads appear to exit properly via Application.Exit(), but an unmanaged exception (and error window) still occurs. This behavior is unacceptable.

The way to ensure that the application restarts properly looks something like this (a simplified sketch; the method name is mine):
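
    using System;
    using System.Diagnostics;
    using System.Windows.Forms;

    static void HardRestart()
    {
        // Launch a brand-new instance of this executable first...
        Process.Start(Application.ExecutablePath);

        // ...then terminate this one unconditionally. Unlike Application.Exit(),
        // Environment.Exit() does not wait on hung worker threads or COM cleanup.
        Environment.Exit(1);
    }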

The Environment.Exit() call is harsh, but it is the only way I know of that guarantees the application really exits. If you want a Windows Application event log entry and a dump of your application, you can use Environment.FailFast() instead.

UPDATE (9/19/09): I ran across a post about COM object memory allocation in mixed managed/unmanaged environments: Getting IUnknown from __ComObject. As this article exemplifies, debugging COM objects under these circumstances is a real pain in the butt. We used strongly-typed managed wrappers for our COM objects. Besides a .NET memory profiler, we just monitored overall allocations externally with Process Explorer. It may be undocumented and fragile, but at least it's good to know that there is a way to dig deeper if you need to.

2009 Ultimate Developer and Power Users Tool List for Windows

Wednesday, September 2nd, 2009

This is a great list. I linked to it in 2007, but somehow forgot in 2008. Anyway, there's probably at least one tool you may not have seen before that would be worth trying out.

Scott Hanselman's 2009 Ultimate Developer and Power Users Tool List for Windows

Oh, and read through the comments. Everyone has their own favorites, and opinions.

Plunging into Web Development

Sunday, June 7th, 2009

I've authored a few web sites. Nothing professionally though. I know just enough HTML, CSS, and JavaScript to be dangerous.

Now I'm faced with creating a customer-facing site that has (or will someday soon have) real requirements.

Here are a couple of the requirements I know so far:

  1. Relatively low volume traffic. The site will be public, but only registered users (customers) will have access.  No product pages, no shopping carts, no ads, no social networking. The front page is a login screen.
  2. Reliable and secure transport and storage of medical data.  At a minimum we must comply with HIPAA standards (privacy rules).

I don't see web site development as really that different from building any other type of application. It's all software. The architectural building blocks may be different, but the developer's mind-set and methodologies for producing a quality product need to  be the same.

I haven't gotten far enough along to really understand all of the deployment and maintenance issues. I'm thinking about them though. The same goes for testing. I can foresee development vs. production platform testing issues that will have to be carefully considered.

What I want to do is walk you through my rationale for the selection of some of the major components and tools I'm considering using for this project.

Web Frameworks

Here's a little historical perspective on selecting a web development framework:

[comic: choosing a web framework]

Yep, that's how it feels.  There are at least 100 options (plus a couple of my additions):

Agavi | AIDA/Web | Ajile | Akelos | Apache Click | Apache Cocoon | Apache Struts | Apache Wicket | AppFuse | Aranea | ASP.NET MVC | Axiom Stack | BFC | CakePHP | Camping | Catalyst | CherryPy | CodeIgniter | ColdSpring | CSLA | CppCMS | Django | DotNetNuke | Drupal | ErlyWeb | eZ Components | Flex | FUSE | Fusebox | Google Web Toolkit | Grok | Grails | Hamlets | Horde | Interchange | ItsNat | IT Mill Toolkit | JavaServer Faces | Jaxer | JBoss Seam | Kepler | Kohana | Lift | LISA | ManyDesigns Portofino | Mason | Maypole | Mach-II | Merb | Midgard | Model-Glue | MonoRail | Morfik | Nitro | onTap | OpenACS | OpenLaszlo | OpenXava | Orbit | PEAR | Orinoco | Pyjamas | Pylons | Qcodo | Radicore | Reasonable Server Faces | RIFE | Ruby on Rails | Seaside | Shale | Simplicity | SilverStripe (Sapphire) | SmartClient | Sofia | SPIP | Spring | Stripes | Symfony | Tapestry | ThinWire | Tigermouse | Vaadin | TurboGears | Wavemaker | web2py | WebObjects | WebWork | Wigbi | Yii | Zend | ZK | Zoop | Zope 2 | Zope 3 | ztemplates

YIKES!!

As a .NET developer, my first inclination was to look at ASP.NET MVC. The two most popular and active open source frameworks are  Ruby on Rails (RoR) and Django (Python-based). To be honest, I have not spent a lot of time investigating any of the others.

Why is it that I often find myself in this situation? It's usually not 100, but there always seem to be multiple well-developed solutions for these types of problems. I ran into the same thing a couple of years ago when I was selecting an ORM for a .NET project.

All you can do is start by taking the advice of others ("most popular") and give one or two a try. Not only will you get a good sense of how well the framework meets your project requirements, but since there will inevitably be problems or questions, you'll also be able to evaluate documentation and community activity.

It's like making pasta -- you throw a noodle against the wall and if it sticks, you're done cooking.  Well, not really... but you know what I mean.

Hosting

One of the major considerations is hosting. I've previously explored the three major cloud computing platforms.

  • Amazon EC2 would be overkill (see requirement #1). I don't see a need for significant scale-up in the foreseeable future. Running a small on-demand EC2 instance 24/7 is more expensive (~$70/month) than just buying hosted services.  Also, supporting a complete OS platform is unnecessary work.
  • Microsoft Azure is currently in CTP (Community Technology Preview) and it's still unclear what the pricing will be.
  • That leaves Google App Engine.  Based on the GAE Quotas, we would be able to operate under the limits for quite a while (exceeding the quotas would be a good thing).  That means GAE can provide us free hosting, which is hard to beat.

There are literally hundreds of hosting options, and most would meet our bandwidth and storage requirements at a nominal cost. Independent of storage (see below), I guess I'm biased towards a cloud solution for two reasons:

  1. "Good Enough" isn't Good Enough: I've been hosting this domain on a commercial site for about 6 years.  I'd classify my host as good enough for my personal use (family site, photo gallery, this blog, etc.).  If my hosting service went away tomorrow, no big deal. I backup everything regularly and could be up and running on a comparable host pretty quickly. But for business purposes that involve critical customer medical data, "good enough" and the possibility of the host disappearing just doesn't cut it.
  2. Large Infrastructure: This is what makes a cloud solution so attractive. With any of the three cloud options you are buying into reliability and stability. They already have multiple data centers, security, and disaster plans in place.  You don't have to worry about Amazon, Microsoft, or Google going away any time soon. Unless you have the resources to build it yourself, IMO using a cloud service is a good business decision.

So for now I'll be using Google App Engine.

Data Storage

Now let's look at requirement #2: reliable and secure data storage. At this time the best solution seems to be Amazon S3. Amazon has already put a lot of thought into this: Creating HIPAA-Compliant Medical Data Applications with Amazon Web Services (warning: PDF). S3 transfer and storage costs are very reasonable. Paying only for what you use is a real benefit.

Both Google and Microsoft are very active in the Healthcare sector (Google Health and HealthVault) and I'm sure will soon have cloud storage offerings with similar features.

There are a number of web hosting sites that claim HIPAA data storage compliance, but most seem to just be using "HIPAA" as a marketing tool to attract medically related clients. I'd stay away from these.

Web Frameworks (part 2)

Deciding to use GAE quickly narrows the web framework choice down. GAE supports Python (with Django) and the Java 6 runtime environment. I do not believe that either ASP.NET or RoR is supported on GAE. Done deal -- Django.

I know what you're thinking.  There are many other Python-based web frameworks and even Java alternatives that I should be considering. That's true, but Django is arguably the most popular and has a very active developers community. Also, there are several Google Code App Engine projects (see below) that support Django integration.

I did play around with RoR. The Ruby language itself is great. I love having five different ways to do the same thing. The RoR web framework is mature and has many of the same features as Django.

I looked at ASP.NET MVC, but only from a distance. Here's a concise take from someone that recently jumped in: ASP.NET MVC Impressions after 1 week.

Development Environment

I initially set up a Windows-based Python/Django/GAE-SDK development environment but found it to be too clumsy. I've settled into Ubuntu 9.04 running in a VirtualBox VM.

The Ubuntu Package Manager handled installation of all the necessary prerequisite components. Now that I think of it, I didn't have to do a single ./configure and make. That's progress!

I'm an old Unix hack and I quickly fell back into my first love: Emacs. After the nostalgia wore off, I needed to find a real development IDE. There were two choices:

  1. Eclipse:  I tried using the PyDev plug-in along with some Django integration instructions I found. Google also provides some Eclipse integration, but being able to start the server and other functions from the IDE was not that important to me.  I'd rather use the command line. Also, Eclipse just seems like a real dog.
  2. Netbeans:  With the Python plug-in Netbeans works fine, so I'll stick with it until something better comes along.

Django (Front-end)

The four features that make  Django attractive:

  • Object-relational mapper: Define your data models entirely in Python. You get a rich, dynamic database-access API for free — but you can still write SQL if needed.
  • Automatic admin interface: Save yourself the tedious work of creating interfaces for people to add and update content. Django does that automatically, and it's production-ready.
  • Elegant URL design: Design pretty, cruft-free URLs with no framework-specific limitations. Be as flexible as you like.
  • Template system: Use Django's powerful, extensible and designer-friendly template language to separate design, content and Python code.
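
To make the first and third points concrete, here's a minimal sketch in Django 1.0-era syntax (the app, model, and view names are hypothetical):

    # models.py -- the schema is defined entirely in Python
    from django.db import models

    class Patient(models.Model):
        name = models.CharField(max_length=100)
        registered = models.DateTimeField(auto_now_add=True)

    # urls.py -- pretty, cruft-free URLs mapped with plain regular expressions
    from django.conf.urls.defaults import patterns

    urlpatterns = patterns('records.views',
        (r'^patients/(?P<patient_id>\d+)/$', 'patient_detail'),
    )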

Carefully walk through the four-part Django tutorial. Beware: there are three versions of the tutorial (0.96, 1.0, and "Latest"). Make sure you're using the desired one.

For Django integration with GAE I'm using app-engine-patch.  I had first tried Google App Engine Helper for Django, but I found that app-engine-patch works much better.

Data Integration (Back-end)

Getting data to and from the S3 server will be a critical component.  I have only started looking into this, but the Amazon documentation seems very good.  The Getting Started Guide examples are presented in multiple languages (PHP, C#, Java, Perl, Ruby, Python).  A Python interface to Amazon Web Services, Boto, also looks like it might be useful.
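
As a taste of what Boto looks like, here's a minimal upload/download sketch (the bucket and key names are hypothetical, and the credentials are placeholders):

    from boto.s3.connection import S3Connection
    from boto.s3.key import Key

    # Placeholders -- real credentials would come from configuration.
    conn = S3Connection('<aws-access-key>', '<aws-secret-key>')
    bucket = conn.get_bucket('medical-data-bucket')

    k = Key(bucket)
    k.key = 'study-123/report.pdf'
    k.set_contents_from_filename('report.pdf')     # upload
    k.get_contents_to_filename('report-copy.pdf')  # download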

Amazon S3 POST is an efficient way to move data to S3:

[diagram: Amazon S3 POST]

The back-end will require much more investigation.

For the additional database needs (account management, logging, auditing, etc.) I'll just use the GAE Datastore.
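
For example, a bare-bones account model on the Datastore might look like this (a sketch; the model and its fields are mine):

    from google.appengine.api import users
    from google.appengine.ext import db

    class Account(db.Model):
        user = db.UserProperty(required=True)
        created = db.DateTimeProperty(auto_now_add=True)
        last_login = db.DateTimeProperty(auto_now=True)

    # Inside a request handler, for the currently signed-in user:
    account = Account(user=users.get_current_user())
    account.put()  # persist to the Datastore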

Overwhelmed

There's a lot of "stuff" here. Investigating and evaluating it all plus making decisions is a daunting process.

The purpose of going through these selections is to reduce the number of variables so I could start concentrating on an architecture and design that will meet project requirements. There are still many unknowns though, and I'm sure there will be major bumps in the road that will cause me to change direction.

UPDATE (11/21/2010): Beware -- you get what you pay for!: Goodbye Google App Engine (GAE)

Guest Article: Static Analysis in Medical Device Software (Part 2) — Methodology

Thursday, June 4th, 2009

Pascal Cuoq at Frama-C continues his discussion of static analysis for medical device software. This is part 2 of 3. Part 1 is here.

In the second part of this article I write about methodology, where tools and engineering come together to produce software that you can entrust with lives. I do not avoid talking about the work my colleagues and I do, but I do mention the work of others too.

The layman often assumes that it must be impossible to make software that works as intended. It is a natural conclusion to draw from one's experience with personal computers, mobile phones, car on-board computers and vending machines. The layman's opinion is biased because for most people, embedded software is the means, rather than an end, and therefore is never noticed when it works. For instance, my own digital reflex camera contains a fair amount of software. Still, I have never observed it to deviate from the behavior described in the thick manual that came with it -- there are some peculiarities that I would call functional bugs, but since the manual describes them at length, as the old joke goes, they are features.

Software that works is not impossible. It is only that, as the late Douglas Adams might have put it, software that doesn't work is slightly cheaper. Moderately large software systems that work well enough not to be noticed can be produced. It is "only" a matter of having simple rules that enforce readability of the developed code by people who have not written it, and an appropriately sized budget for code reviews and quality assurance (usually testing, but bug-finding software analyzers are used here too, and they would be used more if their strengths were not so widely misunderstood). This statement does not cover very large codebases and concurrent systems, which we still aren't very good at building reliably but keep trying anyway.

The specification for my digital camera is the thick manual, although there are also internal specifications for sub-components of the camera's software that I, as an end user, do not get to see. The internal specifications naturally tend to be more technically detailed as they deal with smaller and smaller sub-components. As components are assembled, it becomes possible to check that the corresponding specification for the sub-assembly is satisfied. This method is called the V-Model of software development, although one wonders why it needs such a high-sounding name: almost every manufactured physical object has been built from sub-components with pre-determined specifications since time immemorial.

This has nothing to do with the production of critical code. Or rather, the two components above, development according to an enforced development standard, and quality assurance (debugging), remain but become a small part of the picture in the development of critical code. Two additional components, at least as large as the first two, are the certification and the authority.

Certification is the additional, reflexive examination of the development, verification (i.e. the software conforms to specification) and validation (i.e. the specification corresponds to the actual need) processes.

One difference between software and hardware is that it is harder to make sure that software satisfies the original requirements. This was made very clear in the article that prompted this series of blog posts. And this is why critical software particularly needs certification. Certification is not so much the testing of the software against the specification (this is called "debugging" and it's not specific to critical software) but a cohesive list of arguments leading to the conclusion that whatever testing has been done was sufficient to find any possible flaw with the expected confidence. A certification file does not state "we used this development tool and we ran these 1000 tests for this component" but "we used this development tool, and here are the reasons why we think it's acceptable. Here are the reasons why we think that these 1000 tests are sufficient to ensure that this component works as expected (and, incidentally, here are the tests and their results)". As you would expect, when a static analyzer is used, the certification file does not read "Here is the tool we used and the results we obtained" but "Here is the tool we used. Here is how we established that this tool could reliably be used to ensure this aspect of the requirements, (and incidentally, here are the results we obtained)".

The authority defines the expectations for the certification, and studies the certification file once submitted. In the end, it all comes down to convincing the competent, financially disinterested humans who check the certification file that all the necessary steps have been taken to ensure the safety of the critical device.

We now arrive at the first statement from the article that I disagree with: that in static analysis of software, "achieving a 100% recall rate is rare, if not impossible, and may only be possible at the cost of a very high number of false positives".

First, a 100% recall rate corresponds to the absence of false negatives, which is a perfectly achievable objective. Static analyzers with this property are called "correct" (or "sound"). These adjectives have meaning only in a context where it is clear what bugs are being looked for and what assumptions are made to this end. Assuming this context is unambiguous, they mean that as long as the tool's assumptions are respected, no bug in the analyzed program is left undetected.

Two examples of commercially available static analyzers that have been designed from the ground up to have no false negatives are PolySpace, now distributed by The MathWorks, and Astrée, soon to be distributed by AbsInt. Allow me, however, to translate the sentence "Astrée is capable of producing exactly zero false alarms" from that web page: "false alarms" mean "false positives". Astrée, by design, does not have any false negatives. If it failed to notice a possible run-time error, it would be a bug which, I am sure, would be promptly fixed. The "no false positives" claim only means that it does not have any false positives on some pre-determined representative pieces of software. It is certainly not a guarantee, since, as stated earlier, it is a mathematical impossibility for a static analyzer to reach a verdict for any analyzed program with neither false positives nor false negatives. The best way to determine the number of false positives you can expect Astrée to produce for your code is, as with any other analyzer, to try it.

Now, except in the magical world of marketing, it is indeed true that the fewer false negatives are allowed in the results, the more false positives can be expected. This dilemma arises every time something can only imperfectly be detected. Considering the target readership of my kind host's blog, I do not think I need to harp on this. But if the medical-test analogy does not work for you, consider the shoestring eyelets on my shoes, which cause the metal detectors in international airports to ring almost every time (false positives) because it has become unacceptable in the last few years to run the slightest risk of a weapon going undetected through the controls (a false negative).

Every system has its assumptions: in the case of the airport detector, one is that a weapon is assumed to include some metal. This is a good opportunity to introduce in passing another distinction: "safety" works against the physical world (failures, birds flying into reactors, ...). "Security" works against conscious opponents who are actively trying to use your assumptions to their advantage. This distinction can be applied to software analysis but it is more general than that. Still, even if what you are doing is categorized as "safety", if it's critical, you have to be aware of your assumptions. So the two disciplines are not always very different in philosophy, although they often aim at different objectives.

Thanks to a number of recent advances on the theoretical side, as well as the increase in the computational power available in the workstations where the analysis takes place, you can expect the number of false positives given by a correct static analyzer on your embedded code to be contained. It would be cautious to disbelieve claims that there won't be any.

In addition to the above two static analyzers, I can mention Caveat, another static analyzer without false negatives that has been developed in the laboratory where I work. Caveat is commercially available, although we do not advertise it because it is targeted at very high criticality software that does not concern many (we consider it to be most useful for code with a criticality comparable to level A, the highest in the DO-178B avionics certification standard). Since I am in a mood to take single sentences from web pages and comment on them, please allow me to do it once more: the sentence "[Using Caveat, Airbus France's] goal is to detect errors as soon as possible in the development cycle, and not to prove the software" was written at a time when Airbus France was indeed experimenting with Caveat as an R&D project. This sentence is now completely obsolete. Caveat has been officially used for part of the verification of part of the software of the Airbus A380 — that is, precisely, to establish beyond doubt certain properties about the analysed source code, in place of the unit tests whose role would have been to establish these properties in a more traditional process. As the DO-178B standard mandates, Caveat has been qualified by Airbus as a verification tool to be used for the certification of this particular software.

Also from this laboratory, there is Frama-C, which is available too since it's Open Source. Frama-C is a research prototype to which the experimentation of new ideas has shifted (while Caveat is still being maintained for Airbus and any industrial user who requires it). Frama-C is more of a framework for static analyzers than a static analyzer per se. The analyzers that have been developed in Frama-C so far rely on various techniques but they are all without false negatives. Some of these analyzers are now reliable enough to be considered for R&D experimentation. Caveat was a research prototype too at the time Airbus decided to use it in production and to make it part of its certification process. Whether or not the tool you intend to use comes in a cardboard box, you will have to explain the measures you took to ensure that it was the right tool to use for what you were using it for. What it is called matters less than the measures you took.

The second statement from the article I disagree with is that "static analysis is intended to supplement and improve the effectiveness of existing best practices in testing. It should not be thought of as a substitute for device developers' current testing activities". Of course, if you are using a bug-finding static analyzer with false negatives, you will have a hard time justifying why you removed a single test from those you would have done without the analyzer. Such a tool is most useful in the debugging phase, to identify and remove bugs as quickly as possible, not in the verification phase of a process subject to certification. But when Airbus used Caveat for the A380, it was precisely in place of existing unit tests. The fact that Caveat is designed not to have false negatives was one of the arguments in the validation of Caveat as a verification tool to establish the properties that were previously guaranteed by these unit tests, with the required confidence.

Another way to look at this question is the following: bug-finding static analyzers (that have false negatives) have the potential to be better for debugging than sound analyzers (without false negatives) because, by tolerating false negatives, they can reduce the number of false positives (and save the user time). This debugging phase can be, and often is, lightly covered in the certification because it is later followed by verification, which is the important second check. In a certification-covered verification process, the bugs have already been ironed out and the engineers are not trying to find more bugs but to prove that there aren't any. Any positive is going to be a false positive in this context, even if it comes from the most cautious heuristic tool (a tool that makes a lot of effort to warn only when it is quite certain that a problem exists). On the other hand, during the certified verification process, a heuristic tool's contribution to the bottom line is harder to quantify, since the objective of verification is not to find bugs but to establish that there aren't any.

The statement that there aren't any bugs left when certification starts may look like an exaggeration, but it isn't. If the certification requirements are stringent, changing any part of the code (to fix a bug) means starting the verification from scratch. This is a protection against, among other things, the dangers of C that were alluded to in the first part of this article. If you find bugs at that stage, you are not doing it optimally from the economic point of view (and you are starting afresh a heavy, certification-covered verification process in which, hopefully for you, you will not discover any new bug this time).

I would like to acknowledge the careful editing of my host, the suggestions of my colleague Virgile Prevosto in writing part 1, and the remarks of both my supervisor Benjamin Monate and David Delmas (Airbus France) concerning the present part 2 of this article. The third and last part of this series will be on the topic of formal functional specifications, one of the under-used new tools that have a contribution to make in the verification of critical software. In conclusion, here is a quoted statistic in the style, if not the spirit, of Douglas Coupland's Generation X:

Number of human lives whose loss has been attributed to software failure of a civil airplane: 0

Guest Article: Static Analysis in Medical Device Software (Part 1) — The Traps of C

Friday, May 15th, 2009

Any software controlled device that is attached to a human presents unique and potentially life threatening risks.  A recent article on the use of static analysis for medical device software prompted Pascal Cuoq at Frama-C to share his thoughts on the subject. This is part 1 of 3.

The article Diagnosing Medical Device Software Defects Using Static Analysis gives an interesting overview of the applicability of static analysis to embedded medical software. I have some experience in the field of formal methods (including static analysis of programs), and absolutely none at all in the medical domain. I can see how it would be desirable to treat software involved at any stage of a medical procedure as critical, and coincidentally, producing tools for managing critical software has been my main occupation for the last five years. This blog post constitutes the first part of what I have to say on the subject, and I hope someone finds it useful.

As the article states, in the development of medical software, as in many other embedded applications, C and C++ are used predominantly, for better or for worse. The "worse" part is an extensive list of subtle and less subtle pitfalls that seem to lurk in every corner of these two languages.

The most obvious perils can be avoided by restricting the programmer to a safer subset of the language -- especially if it is possible to recognize syntactically when a program has been written entirely in the desired subset. MISRA C, for instance, defines a set of rules, most of them syntactic, that help avoid the obvious mistakes in C. But only a minority of C's pitfalls can be eliminated so easily. A good sign that coding style standards are no silver bullet is that there exist so many. Any fool can invent theirs, and some have. The returns of mandating more and more coding rules diminish rapidly, to the point that overdone recommendations found in the wild contradict each other, or in the worst case, contradict common sense.

Even written according to a reasonable development standard, a program may contain bugs that can result in run-time errors. Worse, such a bug may, in some executions, fail to produce any noticeable change, and in other executions crash the software. This lack of reproducibility means that a test may fail to reveal the problem, even if the problematic input vector is used.

A C program may yet hide other dangerous behaviors. The ISO 9899:1999 C standard, the bible for C compilers implementers and C analyzers implementers alike, distinguishes "undefined", "unspecified", and "implementation-defined" behaviors. Undefined behaviors correspond roughly to the run-time errors mentioned above. The program may do anything if one of these occurs, because it is not defined by the standard what it should do. A single undefined construct may cause the rest of the program to behave erratically in apparently unrelated ways. Proverbially, a standard-compliant compiler may generate a program that causes the device to catch fire when a division by zero happens.

Implementation-defined behaviors represent choices that are not imposed by the standard, but that have to be made by the compiler once and for all. In embedded software, it is not a viable goal to avoid implementation-defined constructions: the software needs to use them to interface with the hardware. Additionally, size and speed constraints for embedded code often force the developer to use implementation-defined constructs even where standard constructs exist to do the same thing.

However, in the development of critical systems, the underlying architecture and compiler are known before software development starts. Some static analysis techniques lend themselves well to this kind of parameterization, and many available tools that provide advanced static analysis can be configured for the commonly available embedded processors and compilers. Provided that the tests are made with the same compiler and hardware as the final device, the existence of implementation-defined behaviors does not invalidate testing as a quality assurance method, either.

Unspecified behaviors are not treated as seriously as they should by many static analysis tools. That's because unlike undefined behaviors, they cannot set the device on fire. Still, they can cause different results from one compilation to the other, from one execution to the other, or even, when they occur inside a loop, from one iteration to the other. Like the trickiest of run-time errors, they lessen the value of tests because they are not guaranteed to be reproducible.

The "uninitialized variable" example in the list of undesirable behaviors in the article is in fact an example of unspecified behavior. In the following program, the local variable L has a value, it is only unknown which one.

Computing (L-L) in this example reliably gives a result of zero.

Note: For the sake of brevity, people who work in static analysis have a tendency to reduce their examples to the shortest piece of code that exhibits the problem. In fact, in writing this blog post I realized I could write an entire other blog post on the deformation of language in practitioners of static analysis. Coming back to the subject at hand, of course, no programmer wants to compute zero by subtracting an uninitialized variable from itself. But a cryptographic random generator might for instance initialize its seed variable by mixing external random data with the uninitialized value, getting at least as much entropy as provided by the external source but perhaps more. The (L-L) example should be considered as representing this example and all other useful uses of uninitialized values.

Knowledge of the compilation process and lower-level considerations may be necessary in order to reliably predict what happens when uninitialized variables are used. If the local variable L was declared of type float, the actual bit sequence found in it at run-time could happen to represent IEEE 754's NaN or one of the infinities, in which case the result of (L-L) would be NaN.

Uninitialized variables, and more generally unspecified behaviors, are indeed less harmful than undefined behaviors. Some "good" uses for them are encountered from time to time. We argue that critical software should not exhibit any unspecified behavior at all. Uses of uninitialized variables can be excluded by a simple syntactic rule "all local variables should be initialized at declaration", or, if material constraints on the embedded code mean this price is too high to pay, with one of the numerous static analyzers that reliably detect any use of an uninitialized variable. Note that because of C's predominant use of pointers, it may be harder than it superficially appears to determine if a variable is actually used before being initialized or not; and this is even in ordinary programs.

There are other examples of unspecified behaviors not listed in the article, such as the comparison of addresses that are not inside the same aggregate object, or the comparison of an invalid address to NULL. I am in fact still omitting details here. See the carefully worded §6.5.8 in the standard for the actual conditions.

An example of the latter unspecified behavior is (p == NULL) where p contains an invalid address computed as t+12345678 (t being a char array with only 10000000 cells). This comparison may produce 1 when t happens to have been located at a specific address by the compiler, typically UINT_MAX-12345677. It also produces 0 in all other cases. If there is an erroneous behavior that manifests itself only when this condition produces 1, a battery of tests is very unlikely to uncover the bug, which may remain hidden until after the device has been widely deployed.
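
In code, that scenario looks roughly like this (a sketch):

    char t[10000000];

    int f(void)
    {
        char *p = t + 12345678;  /* invalid address: far past the end of t */
        return (p == NULL);      /* unspecified: 0, except for one unlucky placement of t */
    }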

An example of comparison of addresses that are not in the same aggregate object is the comparison (p <= q), when p and q are pointers to memory blocks that have both been obtained by separate calls to the allocation function malloc. Again, the result of the comparison depends on uncontrolled factors. Assume such a condition made its way by accident into a critical function. The function may have been unit-tested exhaustively, but the unit tests may not have taken into account the previous sequence of block allocations and deallocations that results in one block being positioned before or after the other in the heap. A typical static analysis tool is smarter, and may consider both possibilities for the result of the condition, but we argue that in critical software, the fact that the result is unspecified should in itself be reported as an error.
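
Again, sketched in code:

    #include <stdlib.h>

    void g(void)
    {
        char *p = malloc(100);
        char *q = malloc(100);

        if (p <= q) {  /* unspecified: depends on where the heap placed each block */
            /* ... */
        }
    }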

Another failure mode for programs written in C or any other algorithmic language is the infinite loop. In embedded software, one is usually interested in an even stronger property than the absence of infinite loops, the verification of a predetermined bound on the execution time of a task. Detection of infinite loops is a famous example of undecidable problem. Undecidable problems are problems for which it is mathematically impossible to provide an algorithm that for any input (here, a program to analyze) eventually answers "yes" or "no". People moderately familiar with undecidability sometimes assume this means it is impossible to make a static analyzer that provides useful information on the termination of an analyzed program, but the theoretical limitation can be worked around by accepting a little imprecision (false negatives, or false positives, or both, in the diagnostic), or by allowing that the analyzer itself will, in some cases, not terminate.

The same people who recognize termination of the analyzed program as an undecidable property, for which theory states that a perfect analyzer cannot be made, usually fail to recognize that precisely detecting run-time errors or unspecified behaviors is an undecidable problem as well. For these questions, it is also mathematically impossible to build an analyzer that always terminates and emits neither false positives nor false negatives.

Computing the worst-case execution time is at least as hard as verifying termination, therefore it's undecidable too. That's for theory. In practice, there exist useful static analyzers that provide guaranteed worst-case execution times for the execution of a piece of software. They achieve this by limiting the scope of the analysis, firstly, to the style of code that is common in embedded software, and secondly, to the one sub-task whose timing is important. This kind of analysis cannot be achieved using the source code alone. The existing analyzers all use the binary code of the task at some point, possibly in addition to the source code, a sample of the processor to be used in the device, or only an abstract description of the processor.

This was part one of the article, where I tried to provide a list of issues to look for in embedded software. In part two, I plan to talk about methodology. In part three, I will introduce formal specifications, and show what they can contribute to the issue of software verification.

Continuous Learning: 14 Ways to Stay at the Top of Your Profession

Saturday, May 9th, 2009

"Professional development refers to skills and knowledge attained for both personal development and career advancement. " I'm fortunate in that my personal and career interests are well aligned. I must enjoy my work because I do a lot of the same activities with a majority of my free time (just ask my wife!).

Keeping up with an industry's current technologies and trends is a daunting task. Karl Seguin's post Part of your job should be to learn got me to thinking about the things I do to stay on top of my interests. I never really thought about it much before, but as I started making a list I was surprised by how fast it grew. When it reached critical mass, I decided it would be worth sharing.

I actually have two professions. I'm a Biomedical Engineer (formal training) and a Software Engineer (self-proclaimed). I primarily do software design and development, but being in the medical device industry also requires that I keep abreast of regulatory happenings (the FDA in particular, HIPAA, etc.), quality system issues, and industry standards (e.g. HL7).

Keeping track of Healthcare IT trends is also a big task. With the new emphasis by the federal government on EMR adoption, even a small company like mine has started planning and investing in the future demand for medical device integration.

The other major topic of interest to me is software design and development methodologies. A lot of the good work in this area seems to come from people that are involved in building enterprise class systems. I've discussed the ALT.NET community (here) and still think they are worth following.

So here's my list.  I talk about them with respect to my interests (mostly software technologies), but I think they are generally applicable to any profession.

1. Skunk Works

Getting permission from your manager to investigate new technologies that could potentially be used by your company is win-win. In particular, if you can parlay your new-found skills into a product that makes money (for the company, of course), then it's WIN-WIN.

In case you've never heard this phrase:  Skunk works.

2. Personal Projects

I always seem to be working with a new software development tool or trying to learn a new programming language. Even if you don't become an expert at them, I think hands-on exposure to other technologies and techniques is invaluable. It gives you new perspectives on the things that you are an expert in.

Besides getting involved in an open source project, people have many interesting hobby projects.  See Do you have a hobby development project? for some examples.

3. Reading Blogs

I currently follow about 40 feeds on a variety of topics. I try to remove 2-3  feeds and replace them with new ones at least once a month. Here is my Google Reader trend for the last 30 days:

[chart: 30-day Google Reader trend]

You can see I'm pretty consistent. That's 1605 posts in 30 days, or about 53 posts per day. To some, this may seem like a lot. To others, I'm a wimp. During the week I usually read them over lunch or in the evening.

4. Google Alerts

Google Alerts is a good way to keep track of topics and companies of interest. You get e-mail updates with news and blog entries that match any search term. For general search terms use 'once a day' and for companies use 'as-it-happens'.

5. Social Networks

I joined Twitter over a month ago. The 30 or so people I follow seem to have the same interests as I do. What's more important is that they point me to topics and reference sites that I would not have discovered otherwise. I've dropped a few people who were overly verbose or whose tweets were mostly inane (like "I'm going to walk the dog now.").

I'm also a member of LinkedIn. Besides connecting with people you know there are numerous groups you can join and track topical discussions. Unfortunately, there are quite a few recruiters on LinkedIn which somewhat diminishes the experience for me.

I don't have a Facebook account because my kids told me you have to be under 30 to join. Is that true? 🙂

6. Books

I browse the computer section of the bookstore on a regular basis.  I even buy a technical book every now and then.

Downloading free Kindle e-books is another good source, e.g. here are a couple through Karl's post: Foundations of Programming. There's a lot of on-line technical reading material around. Having a variety on the Kindle allows me to read them whenever the mood strikes me. One caution though: the Amazon conversion from PDF and HTML to e-book format is usually not very good. This is particularly true for images and code. But still, it's free -- you get what you pay for.

7. Magazines

There are numerous technical print publications around, but they are becoming rare because of the ease of on-line alternatives. I used to get Dr. Dobb's Journal, but they no longer publish a print version; it is still available electronically though.

I miss that great feeling of cracking open a fresh nerd magazine.  I still remember the pre-Internet days when I had stacks of BYTE laying around the house.

8. Webinars

These tend to be company-sponsored, but content about a product or service you may not know well is a good way to learn a new subject. You just have to filter out the sales pitch. You typically get an e-mail invitation for these directly from a vendor.

9. Local User Groups

I've talked about this before (at the end of the post).  In addition to software SIGs, look into other groups as well. For me, IEEE has a number of interesting lectures in the area.

Face to face networking with like professionals is very important for career development ("It's not what you know -- it's who you know" may be a cliche, but it’s true.).  Go and participate as much as possible.

If there's not a user group in your area that covers your interests, then start your own! For example: Starting a User Group, Entry #1 (first entry of 4).

10. Conferences and Seminars

Press your employer for travel and expenses, and go when you can. This is another win-win for both of you.  Like Webinars, vendor sponsored one day or half day seminars can be valuable.  Also, as in #9, this is another opportunity to network.

Just getting out of the office every now and then is a good thing.

11. Podcasts

These may be good for some people, but I rarely listen to podcasts.  My experience is that the signal to noise ratio is very low (well below 1). You have to listen to nonsense for long periods of time before you get anything worthwhile. But that's just me. Maybe I don't listen to the right ones?

12. Discussion Sites

CodeProject and Stack Overflow are my favorites. Also, if you do a search at Google Groups you can find people talking about every conceivable subject.

Asking good questions and providing your expertise for answers is a great way to show your professionalism.

13. Blogging

IMO your single most important professional skill is writing. Having a blog that you consistently update with material that interests you is a great way to improve your writing skills.  It forces you to organize your thoughts and attempt to make them comprehensible (and interesting) to others.

14. Take a Class

If you have a University or College nearby, they probably have an Extension system that provides classes. Also, there are free on-line courses available, e.g.: Stanford, MIT, and U. of Wash.

UPDATE (6/23/09): Here's some more fuel for #13: The benefits of technical blogging. All good points.

——
CodeProject Note:  This is not a technical article but I decided to add the 'CodeProject' tag anyway. I thought the content might be of general interest to CPians even though there's no code here.

Contradictory Observations and Electronic Medical Records

Tuesday, March 3rd, 2009

Martin Fowler has an interesting discussion in his ContradictoryObservations post.  This little slice of medically related software design insight is particularly relevant because it highlights (at least for me) the complexity of the use of electronic medical records and their interoperability.

In a broader sense I suppose it also shows some of the underlying difficulties that face the Obama administration's new EMR adoption push.  But I'm not going there.

The concepts of observations, rejection, and evidence are good, but they're just the tip of the iceberg:

[diagram: rejected and evidence]

Even after you've modeled the data interactions, how do you effectively communicate these concepts to the user?  Or to another EMR that doesn't know about your model or how it's used?

Martin's view is that:

Most of the time, of course, we don't use complicated schemes like this. We mostly program in a world that we assume is consistent.

Unfortunately, many of the issues facing electronic medical records do require complex solutions. And even when the world is consistent, how you implement a solution may be (actually, will probably be) very different than how I implement it.  Either way, interoperability will always be a challenge.

We're going to need lots of good software design tools to solve these problems.

Dreaming of Flexible, Simple, Sloppy, Tolerant in Healthcare IT

Saturday, January 3rd, 2009

I was recently browsing in the computer (nerd) section of the bookstore and ran across an old Joel Spolsky edited book: The Best Software Writing I.  Even though it's been about four years, good writing is good writing, and there is still a lot of insightful material there.

One of the pieces that struck a chord with me was Adam Bosworth's ISCOC04 Talk (fortunately posted on his Weblog). He was promoting the use of simple user and programmer models (KISS -- simple and sloppy for him) over complex ones for Internet development. I think his jeremiad is just as relevant to the current state of EMR and interoperability. Please read the whole thing, but for me the statement that gets to the point is this:

That software which is flexible, simple, sloppy, tolerant, and altogether forgiving of human foibles and weaknesses turns out to be actually the most steel cored, able to survive and grow while that software which is demanding, abstract, rich but systematized, turns out to collapse in on itself in a slow and grim implosion.

Why is it that when I read "demanding, abstract, rich but systematized" the first thing that comes to mind is HL7? I'm not suggesting that some sort of open ad hoc system is the solution to The EMR-Medical Devices Mess. But it's painfully obvious that what has been built so far closely resembles "great creaking rotten oak trees".

The challenge for the future of Healthcare interoperability is really no different than that of the Internet as a whole (emphasis mine):

It is in the content and the software's ability to find and filter content and in the software's ability to enable people to collaborate and communicate about content (and each other).

I would contend that the same is true for medical device interoperability. Rigid (and oftentimes proprietary) systems are what keep devices from being able to communicate with one another. IHE has created a process to try to bridge this gap, but the complexity of becoming a member, creating an IHE profile, and having it certified is also a significant barrier.

Understanding how and why some software systems have grown and succeeded while others have failed may give us some insights. Flexible, Simple, Sloppy, Tolerant may be a dream, but it also might not be a bad place to start looking for future innovations.

Adam also had this vision while he was at Google: Thoughts on health care, continued (see the speech pdf):

... we have heard people say that it is too hard to build consistent standards and to define interoperable ways to move the information. It is not! ... When we all make this vision real for health care, suddenly everyone will figure out how to deliver the information about medicines and prescriptions, about labs, about EKGs and CAT scans, and about diagnoses in ways that are standard enough to work.

Also see the Bosworth AMIA May07 Speech (pdf) for how this vision evolved, at least for Google's PHR.

UPDATE (2/9/09): Here's a  related article: The Truth About Health IT Standards – There’s No Good Reason to Delay Data Liquidity and Information Sharing that furthers this vision:

We don’t have to wait for new standards to make data accessible—we can do a ton now without standards.  What we need more than anything else is for people to demand that their personal health data are separated from the software applications that are used to collect and store the data.

UPDATE (4/17/09): John Zaleski’s Medical Device Open Source Frameworks post is also related.

Use of an open-source framework approach is probably as good as any. As a management model, I don’t see it as being that much different from the way traditional standards have been developed. Open-source just provides a more ad-hoc method for building consensus. Less bureaucracy is a good thing though. It may also allow for the introduction and sharing of more innovative solutions. In any case, I like visions.

USB plug-n-play (plug-n-pray to some) may be a reasonable connectivity goal, but it does not deal at all with system interoperability. Sure, you can connect a device to one or more monolithic (and stable) operating systems, but what about the plethora of applications software and other devices?  This just emphasizes the need to get out of the “data port” (and even “device driver”) mind-set when envisioning communication requirements and solutions.

Programming Languages for the New Year

Friday, December 26th, 2008

As the New Year approaches, a software developer's idea of renewal is typically learning a new programming language. I ran across the following post, which provides some interesting alternatives to consider.

10 programming languages worth checking out

Also, Dr. Dobb's has an article on functional programming languages: It's Time to Get Good at Functional Programming