Guest Article: Static Analysis in Medical Device Software (Part 3) — Formal specifications

Pascal Cuoq at Frama-C continues his discussion of static analysis for medical device software. Also see Part 1 and Part 2.

In May 2009, I alluded to a three-part blog post on the general topic of static analysis in medical device software. The ideas I hope will emerge from this third and last part are:

  1. Formal specifications are good,
  2. Partial formal specifications are underrated, and
  3. One should never commit in advance to writing anything, however easy it seems it will be at the time.

Going back to point one, a "functional specification" is a description of what a system/subsystem does. I really mostly intend to talk about formal versions of functional specifications. This only means that the description is written in a machine-parsable language with an unambiguous grammar. The separation between formal and informal specifications is not always clear-cut, but this third blog entry will try to convince you of the advantages of specifications that can be handled mechanically.

Near the bottom of the V development cycle, "subsystem" often means software: a function, or a small tree of functions. A functional specification is a description of what a function (respectively, a tree of functions) does and does not do (the time it takes to execute, for instance, is usually not considered part of the functional specification, although whether it terminates at all can belong in it; this is only a matter of convention). The Wikipedia page on "Design by Contract" lists the following as making up a function contract, and while the term is loaded (it may evoke Eiffel or run-time assertion checking, which are not specifically the topic here), the three bullet points below are a good categorization of what functional specifications are about:

  • What does the function expect, what rules should the caller obey at the time of calling it?
  • What does the function guarantee, what is the caller allowed to expect from the function's results?
  • What properties does the function maintain?

I am tempted to point out that invariants maintained by a function can be encoded in terms of things that the function expects and things that the function guarantees, but this is exactly the kind of hair-splitting that I am resolved to give up on.
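To make these three bullet points concrete, here is a small invented example (the function and its contract are mine, not the article's): a C function annotated with its contract in ACSL-style comments.

```c
/* Hypothetical example: a function contract in the ACSL style,
   matching the three categories above.
   - requires: what the function expects from its caller;
   - ensures:  what the caller may expect from the result;
   - assigns:  a property the function maintains (here: no side effects). */
/*@ requires low <= high;
  @ assigns \nothing;
  @ ensures low <= \result <= high;
  @*/
int clamp(int v, int low, int high)
{
    if (v < low)  return low;
    if (v > high) return high;
    return v;
}
```

The annotations compile away as ordinary comments; a tool that understands them can check the caller's obligation (requires) and the function's guarantee (ensures) mechanically.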

The English sentence "when called, this function may modify the global variables G and H, and no other" is almost unambiguous and rigorous — assuming that we leave aliasing out of the picture for the moment. Note that while technically something that the function ensures on return (it ensures that for any variable other than G or H, the value of the variable after the call is the same as its value before the call), this property can be thought of more intuitively as something that the function maintains.

The enthusiastic specifier may like the sentence "this function may modify the global variables G and H, and no other" so much that he starts copy-pasting the boilerplate part from one function to another. Why take the risk of accidentally introducing an ambiguity? Re-writing from memory may lead him to drop the "may" auxiliary when he does not intend to guarantee that the function overwrites G and H each time it is called. As with contracts of a more legal nature, copy-pasting is the way to go. The boilerplate may also include jargon that makes it impossible to understand for someone who is not from the field, or even from the company, whence the specifications originate. Ordinary words may be used with a precise domain-specific meaning. All good reasons not to paraphrase, and to reuse the specification template verbatim.

The hypothetical specifier may particularly appreciate that the specification above is not only carefully worded but also that a list of possibly modified globals is part of any wholesome function specification. He may — rightly, in my humble opinion — endeavor to use it for all the functions he has to specify near the end of the descending branch of the V cycle. This is when he is ripe for the introduction of a formal syntax for functional specifications. According to Wikipedia, Robert Recorde introduced the equal sign "to auoide the tediouſe repetition of [...] woordes", and the sentence above is a tedious repetition begging for a sign of its own to replace it. When the constructs of the formal language are well-chosen, the readability is improved, instead of being diminished.

A natural idea for expressing the properties that make up a function contract is to use the same language as for the implementation. Being a programming language, it is suitably formal; the specifier, even if he is not the programmer, is presumably already familiar with it; and the compiler can turn these properties into executable code that checks that preconditions are properly established by callers, and that the function does its own part by returning results that satisfy its postconditions. This choice can be recognized in run-time assertion checking, and in Test-Driven Development (where unit tests and expected results are written before the function's implementation and are considered part of the specification).
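As a sketch of what run-time assertion checking looks like when the contract is written in the programming language itself (the function below is an invented example, not taken from any real specification):

```c
#include <assert.h>

/* Run-time assertion checking: the precondition and postcondition are
   ordinary C expressions, compiled into the executable and checked on
   every call. */
int checked_div(int num, int den)
{
    assert(den != 0);                        /* precondition: caller's obligation */
    int q = num / den;
    assert(q * den + num % den == num);      /* postcondition: Euclidean identity */
    return q;
}
```

The checks catch violations only on the inputs that actually occur at run time, which is exactly the limitation that static verification of the same properties removes.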

Still, the choice of the programming language as the specification language has the disadvantages of its advantages: it is a programming language, and its constructs are optimized for translation to executable code, with the intent of describing algorithms. For instance, the "no global variable other than G and H is modified" idiom, as expressed in C, is a horrible way to specify a C function. Surely even the most rabid TDD proponent would not suggest it for a function that belongs in the same C file as a thousand global variable definitions.

A dedicated specification language has the freedom to offer exactly the constructs that make it pleasant to write specifications in it. This means directly including constructs for commonly recurring properties, but also providing the building blocks that make it possible to define new constructs for advanced specifications. So a good specification language has much in common with a programming language.

A dedicated specification language may, for instance, offer a single clause as a synonym for the boilerplate function specification above. While such a syntax may seem wordy and redundant to the seat-of-the-pants programmer, I hope to have convinced you that in the context of structured development, it fares well in comparison with the alternatives. Functional specifications give static analyzers that understand them something to chew on, instead of having to limit themselves to the absence of run-time errors. This especially applies to correct static analyzers, as emphasized in part 2 of this oversize blog post.
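For the record, in ACSL (the specification language understood by Frama-C) the construct in question is the assigns clause. The sketch below is my reconstruction, with a placeholder function name; the single annotation states that a call may modify G and H, and nothing else:

```c
/* Sketch: the boilerplate "this function may modify the global
   variables G and H, and no other" becomes one assigns clause.
   The function and globals are placeholders for illustration. */
int G, H;

/*@ assigns G, H; */
void update_counters(void)
{
    G++;
    H++;
}
```
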

Third parties that contact us are often focused on using static analysis tools to do things they weren't doing before. It is natural to expect a new tool to let you do something new, but a more accurate description of our activity is that we aim to let people do the same specification and verification they are already doing (for critical systems), better. In particular, people who discover tools such as Frama-C/Jessie, or other analysis tools based on Hoare-Floyd precondition computations, often think these tools are intended for verifying, and can only be useful for verifying, complete specifications.

A complete specification for a function is a specification where all the properties expected of the function's behavior have been expressed formally as a contract. In some cases, there is only one function (in the mathematical sense) that satisfies the complete specification. This does not prevent several implementations of this unique mathematical function from existing. More importantly, it is nice to be able to check that the C function being verified is one of them!

Complete specifications can be long and tedious to write. In the same way that a snippet of code can be shorter than the explanation of what it does and why it works, a complete specification can sometimes be longer than its implementation. And it is often pointed out that these specifications can be so large that once written, it would be too difficult to convince oneself that they do not themselves contain a bug.

But just because we provide a language that allows you to write complete specifications does not mean that you have to. It is perfectly fine to write minimal formal specifications accompanied by informal descriptions. To be useful, the tools we propose only need to do better than testing (the most widely used verification technique at this level). The informal specifications traditionally used as the basis for tests are not complete either, and there may be bugs both in the informal specification and in its translation into test cases.

If anything, the current informal specifications leave out more details and contain more bugs, because they are not machine-checked in any way. The static analyzer can help find bugs in a specification in the same way that a good compiler's sanity checks and warnings help avoid the stupidest bugs in a program.

In particular, because they are written in a dedicated specification language, formal specifications have better composition properties than, say, C functions. A bug in the specification of one function is usually impossible to overlook when that specification is used in the verification of the function's callers. Take an example from ACSL by Example, the tutorial/library authored by our colleagues at the applied research institute Fraunhofer FIRST: the function max_element finds the index of the maximum element in an array of integers. Its specification is quite long, and a bug in it might be hard to detect; the formal version is complicated by the fact that it specifies that, if there are several maximum elements, the function returns the index of the first one.
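Here is my paraphrase of that contract, together with a straightforward implementation. This is a simplified sketch, not the exact published specification (the published one also covers the empty-array case); the last ensures clause is the tie-breaking part: every element before the returned index is strictly smaller.

```c
#include <stddef.h>

/* Sketch in the spirit of the ACSL by Example max_element contract
   (my paraphrase, restricted to non-empty arrays). */
/*@ requires n > 0 && \valid_read(a + (0 .. n-1));
  @ assigns \nothing;
  @ ensures 0 <= \result < n;
  @ ensures \forall integer i; 0 <= i < n ==> a[i] <= a[\result];
  @ ensures \forall integer i; 0 <= i < \result ==> a[i] < a[\result];
  @*/
size_t max_element(const int *a, size_t n)
{
    size_t best = 0;
    for (size_t i = 1; i < n; i++)
        if (a[i] > a[best])   /* strict comparison keeps the first maximum */
            best = i;
    return best;
}
```
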

Next, the document defines a function max_seq that returns the value of the maximum element in an array. The implementation is straightforward:
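A sketch of what it might look like (my reconstruction from the description; a minimal max_element is inlined so the snippet is self-contained and compilable):

```c
#include <stddef.h>

/* Helper inlined for self-containment; see the max_element sketch above. */
static size_t max_element(const int *a, size_t n)
{
    size_t best = 0;
    for (size_t i = 1; i < n; i++)
        if (a[i] > a[best])
            best = i;
    return best;
}

/*@ requires n > 0 && \valid_read(a + (0 .. n-1));
  @ assigns \nothing;
  @ ensures \forall integer i; 0 <= i < n ==> a[i] <= \result;
  @ ensures \exists integer i; 0 <= i < n && a[i] == \result;
  @*/
int max_seq(const int *a, size_t n)
{
    return a[max_element(a, n)];
}
```

Note that max_seq's contract says nothing about indices or ties; it only promises the maximum value, which is the simpler, easier-to-trust high-level property discussed below.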

The verification of max_seq builds on the specification for max_element. This provides additional confidence: the fact that max_seq was verified successfully makes a bug in the specification of max_element unlikely. Not only that, but if the (simpler, easier to trust) specification for max_seq were the intended high-level property to verify, it wouldn't matter that the low-level specification for max_element was not exactly what the specifier intended (say, if there was an error in the specification of the index returned in case of ties): the complete system still has no choice but to behave as intended in the high-level specification. Unlike a compiler that lets you put together functions with incompatible expectations, the proof system always ensures that the contract used at the call point is the same as the contract that the called function was proved to satisfy.

And this concludes what I have to say on the subject of software verification. The first two parts were rather specific to C, and would only apply to embedded code in medical devices. This last part is more generic — in fact, it is easier to adapt the idea of functional specifications for static verification to high-level languages such as C# or Java than to C. Microsoft is pushing for the adoption of its own proposal in this respect, Code Contracts. Tools are provided for the static verification of these contracts in the premium variants of Visual Studio 2010. And this is a good time to link to this video. Functional specifications are a high-level and versatile tool, and can help with the informational aspects of medical software as well as with the embedded side of things. I would like to thank again my host Robert Nadler, my colleague Virgile Prevosto and department supervisor Fabrice Derepas for their remarks, and twitter user rgrig for the video link.

Posted in Software Quality, Tools

Why Healthcare IT is Not a Game Changer

Last week I attended the WLSA/Continua Mobile Healthcare Symposium and the opening day of the Continua Health Alliance Winter Summit 2010. Also, a couple of weeks ago I attended a few sessions of the FDA Workshop on Medical Device Interoperability: Achieving Safety and Effectiveness via a Webcast*.

Since I'm not going to HIMSS in Atlanta this year (starts Mar. 1) I thought now would be a good time to do some venting.

I've talked about HIT problems before, e.g. Healthcare Un-Interoperability and The EMR-Medical Devices Mess. With all of the ARRA/HITECH talk along with the National Healthcare debate raging, it made me wonder how the issues facing device interoperability, wireless Healthcare, and HIT in general really fit into the bigger picture.

After sitting through multiple sessions on a wide variety of topics presented by smart people, the obvious hit me in the face: the complexity of the issues is mind-numbing. Everybody has good (and even great) ideas, but nobody has real solutions. Why is it that all this good HIT hasn't translated into meaningful improvements in Healthcare?

For example, at first I thought the talk by Dr. Patrick Soon-Shiong might be heading somewhere interesting. He presented a well-structured view of the current Healthcare landscape that seemed to make a lot of sense. Then he plunged into the abyss with an in-depth discussion of transformational technologies (molecular data mining, Visual Evoked Potentials, etc.). These developments could potentially lead to improvements in people's health, but we never got to hear how any of the complex Healthcare delivery issues were going to be addressed.

Among his many endeavors, Dr. Soon-Shiong is Chairman of the National Coalition for Health Integration (NCHI). I think the "Zone of Complexity" point of view (see here -- warning PDF) is a good starting point for understanding the position that Healthcare IT is in:

Also, following the diagram above is this statement:

However, currently, even when information is in digital formats, data are not accessible because they reside in different “silos” within and between organizations. In turn, the U.S. health system is hampered by inefficient virtual organizations that lack the mechanisms needed to engage in coordinated action.

The NCHI Integrated Health Platform (grid computing) is a good idea, but does it really even begin to provide the solution to these complex problems?

  1. They are taking a "bottom-up" approach to interoperability (system, data, and process) and trying to leverage existing technologies (like DICOM and HL7). Makes sense. But other than academic or government institutions, what's the incentive for private companies (like EMR vendors) to participate?
  2. How is an improved underlying infrastructure going to reduce the chaotic nature of the health delivery system (hospitals, insurance companies, Medicare, etc.)? It's like putting the cart before the horse.

This is the dilemma. We can come up with clever and even ingenious technical solutions in our little IT world, but none of them are going to be game changers. The availability of great technologies is not enough to change the institutional processes that make an organization inefficient or its communication ineffective.

The solution is in the people and the processes they follow. The best example I can think of is EMR adoption. Everybody knows why the rate of conversion from a paper to a paperless office is so low. It's mostly because of people's resistance to changing the way they've "always done it." Change is hard, and in this case that resistance is the barrier to adoption, no matter how good the EMR solution is.

At the national level Healthcare IT only enables interoperability and improved data management.  The chaos can only be solved by first changing U.S. Healthcare delivery policies.  Whatever the changes are, they will then determine the incentives and processes that actually drive the system and put HIT to use.

For Healthcare IT, the NCHI is just one example. There are a whole bunch of other technology-driven initiatives that also have high hopes.  I'm not saying we should stop developing great technologies.  We just shouldn't be surprised when they don't change the world.

Happy Presidents Day!

UPDATE (8/4/10): Martin Fowler's UtilityVsStrategicDichotomy post is another perspective on "IT Doesn't Matter".

*I thought the Webcast was very well done. It had a split screen (speaker and slides) along with multiple camera views that included the audience. The video quality wasn't great (it really didn't need to be), but the streaming was reliable. Also, web participants could chat among themselves and with the on-site staff, and ask the speaker questions.

Posted in EMR, FDA, Interoperability

The Challenges of Developing Software for Medical Devices

Developing Software for Medical Devices – Interview with SterlingTech gives a good overview of the challenges that especially face young medical device companies. In particular (my emphasis):

Make sure that your company has a good solid Quality System as it applies to software development. Do not put a Quality System into place that you can not follow. This is the cause of most audit problems.

I couldn't have said it better myself, though I think that focusing on the FDA may distract you from why you're creating software quality processes in the first place. The real purpose of having software design controls is to produce a high quality, user friendly, robust, and reliable system that meets the intended use of the device.  If your quality system does that, you won't have to worry about FDA audits.

Since Klocwork is a static analysis tool company, I also want to point out a recent related article that's worth reading -- and trying to fully understand:

A Few Billion Lines of Code Later: Using Static Analysis to Find Bugs in the Real World

Note the user comment by Bjarne Stroustrup.

UPDATE (2/9/10): Here's another good code analysis article:

A Formal Methods-based verification approach to medical device software analysis

Posted in FDA, Medical Devices, Programming, Software Quality, Tools

Stackoverflow Overflow Update

In Stackoverflow Overflow I predicted 500,000 questions on 2/7/2010 at 5:31. When I checked (after the Superbowl -- congratulations to NO!) at 19:42 this evening:

Not bad. Only 14 hours off on a three month linear extrapolation from only two weeks of data!

Posted in Programming

The BCI X Prize

As announced at a recent MIT workshop: The BCI X PRIZE: This Time It’s Inner Space:

The Brain-Computer Interface (BCI) X PRIZE will reward nothing less than a team that provides vision to the blind, new bodies to disabled people, and perhaps even a geographical “sixth sense” akin to a GPS iPhone app in the brain.

As I've discussed many times (e.g. BCI: Brain Computer Interface), "mind reading" with EEG is a huge challenge. Another hurdle they have to overcome:

The foundation must court donors to make the $10 million+ prize a reality. Once funding is secured,...

That will be the easy part.

The problem with the X Prize incentive approach is one of expectations. If people believe that Avatar-like advances ("new bodies") are a realistic result, they will be sorely disappointed.

Even though I'm a certified "mind reading" skeptic I think great BCI strides will inevitably be made. The good news is that these innovations will provide numerous benefits for handicapped individuals.

UPDATE (2/5/10): Here's a great example: Technology Behind Second Sight Retinal Prosthesis

Posted in EEG, HCI, Medical Devices, Technology

More on the Zeo Personal Sleep Coach

Even though it has been over 6 months, my Zeo scam post is suddenly getting some comment traction. I thought I'd respond to some of these as well as clarify my thoughts.

I'm not really sure why Krunz thinks I'm an idiot.

First, I never said that changes in life style do not affect the quality of sleep. They do indeed. For example, for OSA (obstructive sleep apnea):

Some treatments involve lifestyle changes, such as avoiding alcohol and medications that relax the central nervous system (for example, sedatives and muscle relaxants), losing weight, and quitting smoking. Some people are helped by special pillows or devices that keep them from sleeping on their backs, or oral appliances to keep the airway open during sleep.

Second, I did say that because the ZQ score is based on sleep staging (no matter how crude), I can believe that an increased ZQ is indicative of better sleep.

My problem with the Zeo device is the claim that ZQ score improvement is caused by any particular life style change. This would be very difficult to validate.

Let's say you recorded your ZQ score for 30 days without making any life style changes. There will be an inherent variability in the ZQ score that results from a variety of sources -- electrode placement, user movement during sleep, etc. Unless an introduced life style change makes a statistically significant difference in the ZQ score, you cannot attribute causality to it. And even if there were a significant ZQ change, you would still need to somehow prove that there were no other factors involved.

RobertF was not only more civil, but he took the time to detail his opinions and asked some good questions about mine. Here are my responses:

1. The red flag for me is when you make unvalidated claims. Anecdotal evidence of an improved ZQ score through "sleep coaching" is not validation.

The same is true for the "alarm clock" functionality. Zeo and others (e.g. Actigraphy) make similar claims about waking during lighter periods of sleep reducing sleep inertia.  Again, there is anecdotal evidence and even some testable theories, but a lot more research needs to be done in this area.

2. On one hand, Zeo does not claim that this is a therapeutic or a diagnostic device. From their web site:

The Zeo Personal Sleep Coach is neither a medical device nor a medical program and is not intended for the diagnosis or treatment of sleep disorders.

On the other hand they also say the Zeo provides:

...personalized sleep information and customized action steps to improve your Sleep Fitness™

Think about it. In one breath they say it isn't, and then in the next they say it is!  It has nothing to do with FDA approval per se. It's the contradictory claims that bother me.

3. I don't think I was incredulous about this. I've mentioned several times that I think the dry sensor EEG technology probably provides enough signal quality to do a reasonable job of sleep staging.

4. It is the lack of “clinical validation” that is the most problematic for me. Basic sleep research may be heading in this direction -- combining "sleep science, sleep education, neuroscience, behavioral psychology,..." -- but they still have a long way to go.

5. Why would anybody that doesn't have a sleep problem buy this product? If I did have sleep problems $350 would probably be well worth it, as long as it actually worked!  Sergey agrees.  If it didn't work, I'd want my money back -- and so would you.

6. You may not have any connection to the Zeo company, but the advisory board members are all paid to be there. I am not saying that this in any way lessens their scientific or professional credentials. On the contrary,  a good advisory board should be asking tough questions and doing their best to improve the product.

Wrap Up:

OK, I'll admit it.  Maybe "scam" is too harsh. The reason I chose that term was because of its definition as a confidence trick:

an attempt to defraud a person or group by gaining their confidence

When I first read about the Zeo I felt that their presentation of the sleep science and technology was an attempt to gain a customer's confidence. Beyond that there was little evidence that this product would help people.

I do not have any ill will towards Zeo. It appears that their customer service and return policies (30 day full refund) are good. As a medical device developer, I'm just pointing out what I think are important issues about this device.

I still maintain that the claims made by Zeo are misleading. You need to be able to show scientific evidence that the ZQ score actually does track with life style changes. Unless I'm missing something, Zeo has not done this.

Posted in EEG

Depth of Anesthesia Reality Check

I think this is the first time I've ever seen MedGadget express such a strong opinion about a technology.

Masimo Invests in Anesthesia Awareness Technology. Good Move? We Don't Think So doesn't pull any punches.

What's interesting to me is that SEDLine was Hospira's brain function monitoring business (see here).  Hospira bought the technology from a Boston-based company called Physiometrix in 2005.

Back in my EEG days I had a chance to work with Physiometrix. We interfaced with their EEG front-end hardware in an attempt to develop an OEM relationship.  At the time, they were using essentially the same Bispectral index (BIS) technology as Aspect Medical.  The only other thing I remember is that they were also using QNX.

MedGadget's skepticism seems well founded. On the other hand, the people at Masimo (a couple of whom I know) aren't dummies. They may know something the rest of us don't.

Posted in EEG, Medical Devices

Ch-ch-ch-changes

About the only thing you can count on in this world, besides taxes and death, is change.

When we moved from Madison to San Diego in 2005, that was a big change. Of course in Jan/Feb the 70 deg temperature difference makes that decision seem pretty smart. When our 12 y/o golden retriever Miles passed away this past Oct. that change really sucked.

Switching jobs is also a big change. As I've previously discussed, my old company was purchased and I chose not to relocate. As soon as I wrote the words "in-the-trenches" I had an inkling that I had probably jinxed myself. Maybe jinxed isn't the right word, but I certainly ended up in a different situation than I had imagined.

Last week I started working as a Health Informatics Architect at ResMed, a global leader in sleep medicine and non-invasive ventilation.  Like all medical device companies, ResMed is faced with the daunting challenge of providing the therapeutic data produced by their flow generators to physicians and healthcare organizations.

This position will allow me to continue to develop solutions for medical device interoperability, but at a whole new level. Working with a global team at a world-class company is a very exciting opportunity. I'm looking forward to the challenges ahead.

This change is good!

Posted in General, Medical Devices

Actigraphy for Better Sleep?

I previously questioned the efficacy of the Zeo "Personal Sleep Coach" and concluded that this device would be unlikely to provide their claimed sleep improvements.

Another method for monitoring sleep patterns is actigraphy*. I seriously doubt that these movement-based devices can do any better.

At least the Zeo device uses an EEG-based sleep histogram for determining sleep state. How can an actigraph tell the difference between someone just lying awake quietly and deep sleep?

*This Wikipedia entry reads like an advertisement for one of these devices!

Posted in EEG

Dear Prospective Employer,

Based on the job description, I am a perfect candidate for this position...

As I've previously discussed, my company was sold this past summer. Since then they announced that our operation will be moved to Seattle by the end of the year. SonoSite has been very professional and generous, but I have decided to stay in San Diego.

I made this decision several months ago, but since I will be employed until the end of the year, I have not been very active in my job search. Until now.

So, if you're reading this you may very well be an employer looking to hire someone like me. You might have gotten here from my Stack Overflow Careers CV or even directly from my resume.

There is one question that I can answer up-front:

Q: What are your long-term career goals? More specifically, do you want to do development or do you want to be a software project manager?

A:  This is the fork in the career road that most software engineers eventually get to. I've done both and my preference is in-the-trenches software design and development. I get the most enjoyment from building solutions in a collaborative team environment.

Thank you for your consideration.

Sincerely,

Bob

If you're also looking for a job, I wanted to share a little.

About a month ago I came across a "Principal Software Engineer" position that I thought fit my skills and interests pretty well. I submitted my resume and got a full day interview a couple of weeks later. I hadn't done an interview in over four years. Here are some of the highlights:

  1. I was asked the usual technical programming questions. Mostly about .NET/C#, e.g. see Dot Net Interview Questions. Since I've asked prospective employees the same questions a number of times, I think I did pretty well on these.
  2. The software design problem was also pretty typical. How would you design a 4-way stop light control system?  Hint: Ask about requirements. Even though you have assumptions about how something this familiar works, others may have a very different perspective.
  3. The dreaded logic question. I got The 8 ball problem. I hate these things.  I eventually got to the 3-try solution, but the 2-try was beyond my cognitive powers. Oh well.

Even though I was not offered the job, the overall experience was generally good (the rejection part sucked).  I think their definition of "Principal" was different than mine.

Every company has different interviewing techniques and practices.  It seems that large companies have developed the most rigorous (and onerous) methods. Google is known for its over-the-top questions: 15 Google Interview Questions That Will Make You Feel Stupid. A more pragmatic approach, e.g. How I Hire Programmers, makes sense: "Are they smart? Can they get stuff done? Can you work with them?". I'm not sure many companies can afford to invest that much in interviewees though.

Speaking of "Are they smart?", Jonah Lehrer's article Vince Young talks about the relationship between an IQ test and the performance of NFL quarterbacks.  I think the same basic concept applies to developing software products. As important as writing good code is, each engineer must also be able to understand the business needs and really listen to marketing/sales and of course the customer ("emotional intelligence").  There is no IQ test for that.  "Genius is one percent inspiration and 99 percent perspiration" (Thomas Edison) also applies.

Interviewing is a two-way street so I would be remiss if I didn't mention The Joel Test: 12 Steps to Better Code. Don't forget to ask good questions.

Anyway...

Just like the rest of the job market these days, the competition for all types of developer positions is pretty intense. The trick will be finding that perfect match between my skills and the employer's needs and environment. We'll see how it goes. Wish me luck!

UPDATE (12/3/09):  The Codypo Test, aka 8 Questions To Identify A Lame Programming Job

Posted in General