Archive for June, 2009

Cloud Computing Design Patterns

Sunday, June 28th, 2009

I attended some talks this weekend at the SoCal Code Camp.  Since I've been exploring cloud computing lately, David Pallmann's talk on Azure Design Patterns was of particular interest.

The Azure Design Patterns site gives an overview of the Azure services ("Core"), but it was the composite applications that combine these core services that provided the most insight into potential cloud applications.

For example, the Hosted Web Service with Background Workers pattern is depicted like this:

[Diagram: Hosted Web Service with Background Workers]
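
Just to make the pattern concrete, here's my own back-of-the-envelope sketch in Python -- an in-process queue standing in for Azure's queue storage, a web handler enqueueing work, and a background worker draining it. The names are placeholders of mine, not Azure APIs:

```python
# Generic sketch of the "web front end + queue + background worker" pattern;
# the in-process queue and function names are placeholders, not Azure code.
import queue
import threading
import time

work_queue = queue.Queue()

def handle_request(payload):
    """Web role: accept the request, enqueue the heavy work, return quickly."""
    work_queue.put(payload)
    return "accepted"

def background_worker():
    """Worker role: drain the queue and do the slow processing asynchronously."""
    while True:
        item = work_queue.get()
        time.sleep(1)  # stand-in for the expensive processing
        print("processed", item)
        work_queue.task_done()

threading.Thread(target=background_worker, daemon=True).start()
handle_request({"job": 42})
work_queue.join()  # in a real deployment the two roles run in separate instances
```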

Dave spent the majority of the talk on the Azure core services. The differences and similarities between Azure, Amazon Web Services, and Google App Engine were easy to identify.

The Azure core services are interesting, but I would have liked a more thorough investigation of these application patterns and their implementation details. There was just too much material to cover in a 1.5 hour talk.

It's easy to see how many of these application patterns could be implemented in either AWS or GAE.  Understanding a pattern's pros and cons in the context of any of the available cloud computing solutions is critical when you're considering an architecture.

I haven't been able to find similar design documentation from AWS or GAE. They only cover their core service APIs and provide white papers on how specific applications are constructed.

Kudos to Dave for the great talk and putting together these useful descriptions and code samples.

UPDATE (11/25/09): Cloud Computing Patterns

Is the Zeo “Personal Sleep Coach” a Scam?

Monday, June 15th, 2009

It's hard to believe that MedGadget covered the Zeo Personal Sleep Coach as if it were a real medical device.

At first the technology looks intriguing:

  • Softwave™ Sensor Technology (wireless)
  • Bedside Display
  • Web and Database Technology

So far so good. Then you get:

  • Personalized Sleep Coaching Program
  • SmartWake™ Alarm (optional)

RED FLAG!!!

and the Zeo 7 Steps to Sleep Fitness are:

  1. Evaluate your Sleep Fitness
  2. Relax your way to sleep
  3. Build your bedroom sanctuary
  4. Optimize your sleep schedule
  5. Adopt the Power Down Hour™
  6. Eat and drink smart for sleep
  7. Harmonize with your housemates

All for only $399 (with free shipping).

Now the fine print:

Zeo Personal Sleep Coach is neither a medical device nor a medical program and is not intended for the diagnosis or treatment of sleep disorders. If you suspect that you may have a sleep disorder, consult your physician.

Is this for real?  I guess I shouldn't be surprised -- No FDA approval.

I can believe that these sensors are capable of collecting EEG that could be used for sleep staging.  But even that hasn't been proven. An abstract accepted for presentation is interesting, but it is not validation. The technology here is the confidence part of the trick.

They claim to use the sleep histogram (personal sleep score, or "ZQ") along with on-line analysis as a metric for determining whether any of the 7 "Sleep Fitness" steps are actually helping.  Based on normal ranges of sleep stage percentages during the night, these metrics may well tell you whether a person slept "normally", but can ZQ changes really be attributed to some lifestyle alteration? Where's the clinical validation for this?
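
I have no idea what's actually inside the ZQ calculation, but the basic idea -- bucket the night's epochs into stages and compare the percentages against "normal" ranges -- is easy to sketch. Everything below (the ranges, the data, the pass/fail check) is a made-up illustration, not Zeo's algorithm:

```python
# Purely illustrative: per-stage percentages from a hypnogram (one stage
# label per 30-second epoch) compared against rough "normal" adult ranges.
# The ranges and the scoring are placeholders, not Zeo's ZQ.
from collections import Counter

NORMAL_RANGES = {          # approximate percent of total sleep time
    "light": (45, 60),
    "deep":  (13, 23),
    "rem":   (20, 25),
}

def stage_percentages(hypnogram):
    sleep_epochs = [s for s in hypnogram if s != "wake"]
    counts = Counter(sleep_epochs)
    total = len(sleep_epochs)
    return {stage: 100.0 * counts.get(stage, 0) / total for stage in NORMAL_RANGES}

def looks_normal(percentages):
    return all(lo <= percentages[stage] <= hi
               for stage, (lo, hi) in NORMAL_RANGES.items())

night = ["light"] * 600 + ["deep"] * 200 + ["rem"] * 250 + ["wake"] * 50
pct = stage_percentages(night)
print(pct, looks_normal(pct))
```

Even if a gadget can compute something like this reliably, that still says nothing about whether week-to-week changes in the score are caused by "harmonizing with your housemates".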

Also, the technology is supposed to:

find what could be a “natural awakening point” - when it could be a little easier to get out of bed in the morning.

It could, huh?

Anyone who would shell out money for a product like this probably has a real sleep disorder and should see a medical professional for evaluation.

Most sleep disorders are caused by apnea events anyway. A real ambulatory polysomnography (PSG) system (e.g. the Somté) includes EEG, EOG, and a full complement of breathing parameters (airflows, pressures, and SaO2).

To me anyway, the Zeo device and program will help very few people and appears to be another direct-to-consumer rip-off.

UPDATE (1/17/2010):  More on the Zeo Personal Sleep Coach

Plunging into Web Development

Sunday, June 7th, 2009

I've authored a few web sites. Nothing professionally though. I know just enough HTML, CSS, and JavaScript to be dangerous.

Now I'm faced with creating a customer-facing site that has (or will someday soon have) real requirements.

Here are a couple of the requirements I know so far:

  1. Relatively low volume traffic. The site will be public, but only registered users (customers) will have access.  No product pages, no shopping carts, no ads, no social networking. The front page is a login screen.
  2. Reliable and secure transport and storage of medical data.  At a minimum we must comply with HIPAA standards (privacy rules).

I don't see web site development as really that different from building any other type of application. It's all software. The architectural building blocks may be different, but the developer's mind-set and methodologies for producing a quality product need to be the same.

I haven't gotten far enough along to really understand all of the deployment and maintenance issues. I'm thinking about them though. The same goes for testing. I can foresee development vs. production platform testing issues that will have to be carefully considered.

What I want to do is walk you through my rationale for selecting some of the major components and tools I'm considering using for this project.

Web Frameworks

Here's a little historical perspective on selecting a web development framework:

[Image: choosing a web framework]

Yep, that's how it feels.  There are at least 100 options (plus a couple of my additions):

Agavi | AIDA/Web | Ajile | Akelos | Apache Click | Apache Cocoon | Apache Struts | Apache Wicket | AppFuse | Aranea | ASP.NET MVC | Axiom Stack | BFC | CakePHP | Camping | Catalyst | CherryPy | CodeIgniter | ColdSpring | CSLA | CppCMS | Django | DotNetNuke | Drupal | ErlyWeb | eZ Components | Flex | FUSE | Fusebox | Google Web Toolkit | Grok | Grails | Hamlets | Horde | Interchange | ItsNat | IT Mill Toolkit | JavaServer Faces | Jaxer | JBoss Seam | Kepler | Kohana | Lift | LISA | ManyDesigns Portofino | Mason | Maypole | Mach-II | Merb | Midgard | Model-Glue | MonoRail | Morfik | Nitro | onTap | OpenACS | OpenLaszlo | OpenXava | Orbit | PEAR | Orinoco | Pyjamas | Pylons | Qcodo | Radicore | Reasonable Server Faces | RIFE | Ruby on Rails | Seaside | Shale | Simplicity | SilverStripe (Sapphire) | SmartClient | Sofia | SPIP | Spring | Stripes | Symfony | Tapestry | ThinWire | Tigermouse | Vaadin | TurboGears | Wavemaker | web2py | WebObjects | WebWork | Wigbi | Yii | Zend | ZK | Zoop | Zope 2 | Zope 3 | ztemplates

YIKES!!

As a .NET developer, my first inclination was to look at ASP.NET MVC. The two most popular and active open source frameworks are Ruby on Rails (RoR) and Django (Python-based). To be honest, I have not spent a lot of time investigating any of the others.

Why is it that I often find myself in this situation? It's usually not 100, but there always seem to be multiple well-developed solutions for these types of problems.  I ran into the same thing a couple of years ago when I was selecting an ORM for a .NET project.

All you can do is start by taking the advice of others ("most popular") and giving one or two a try.  Not only will you get a good sense of how well the framework meets your project requirements, but since there will inevitably be problems or questions, you'll also be able to evaluate the documentation and community activity.

It's like making pasta -- you throw a noodle against the wall and if it sticks, you're done cooking.  Well, not really... but you know what I mean.

Hosting

One of the major considerations is hosting. I've previously explored the three major cloud computing platforms.

  • Amazon EC2 would be overkill (see requirement #1). I don't see a need for significant scale-up in the foreseeable future. Running a small on-demand EC2 instance 24/7 is more expensive (~$70/month -- roughly $0.10/hour over a 720-hour month) than just buying hosted services.  Also, supporting a complete OS platform is unnecessary work.
  • Microsoft Azure is currently in CTP (Community Technology Preview) and it's still unclear what the pricing will be.
  • That leaves Google App Engine.  Based on the GAE Quotas, we would be able to operate under the limits for quite a while (exceeding the quotas would be a good thing).  That means GAE can provide us free hosting, which is hard to beat.

There are literally hundreds of hosting options, and most would meet our bandwidth and storage requirements at a nominal cost.  Independent of storage (see below), I guess I'm biased towards a cloud solution for two reasons:

  1. "Good Enough" isn't Good Enough: I've been hosting this domain on a commercial site for about 6 years.  I'd classify my host as good enough for my personal use (family site, photo gallery, this blog, etc.).  If my hosting service went away tomorrow, no big deal. I backup everything regularly and could be up and running on a comparable host pretty quickly. But for business purposes that involve critical customer medical data, "good enough" and the possibility of the host disappearing just doesn't cut it.
  2. Large Infrastructure: This is what makes a cloud solution so attractive. With any of the three cloud options you are buying into reliability and stability. They already have multiple data centers, security, and disaster plans in place.  You don't have to worry about Amazon, Microsoft, or Google going away any time soon. Unless you have the resources to build it yourself, IMO using a cloud service is a good business decision.

So for now I'll be using Google App Engine.

Data Storage

Now let's look at requirement #2: reliable and secure data storage. At this time the best solution seems to be Amazon S3. Amazon has already put a lot of thought into this:  Creating HIPAA-Compliant Medical Data Applications with Amazon Web Services (warning: PDF).  S3 transfer and storage costs are very reasonable.  Paying only for what you use is a real benefit.

Both Google and Microsoft are very active in the Healthcare sector (Google Health and HealthVault) and I'm sure will soon have cloud storage offerings with similar features.

There are a number of web hosting sites that claim HIPAA data storage compliance, but most seem to just be using "HIPAA" as a marketing tool to attract medically related clients. I'd stay away from these.

Web Frameworks (part 2)

Deciding to use GAE quickly narrows the web framework choice down. GAE supports Python (w/ Django) and the Java 6 runtime environment. I do not believe that either ASP.NET or RoR is supported on GAE. Done deal -- Django.

I know what you're thinking.  There are many other Python-based web frameworks and even Java alternatives that I should be considering. That's true, but Django is arguably the most popular and has a very active developer community. Also, there are several Google Code App Engine projects (see below) that support Django integration.

I did play around with RoR. The Ruby language itself is great. I love having five different ways to do the same thing. The RoR web framework is mature and has many of the same features as Django.

I looked at ASP.NET MVC, but only from a distance. Here's a concise take from someone that recently jumped in: ASP.NET MVC Impressions after 1 week.

Development Environment

I initially set up a Windows-based Python/Django/GAE-SDK development environment but found it to be too clumsy.  I've settled into Ubuntu 9.04 running in a VirtualBox VM.

The Ubuntu Package Manager handled installation of all the necessary prerequisite components. Now that I think of it, I didn't have to do a single ./configure and make. That's progress!

I'm an old Unix hack and I quickly fell back into my first love: Emacs. After the nostalgia wore off, I needed to find a real development IDE.  There were two choices:

  1. Eclipse:  I tried using the PyDev plug-in along with some Django integration instructions I found. Google also provides some Eclipse integration, but being able to start the server and other functions from the IDE was not that important to me.  I'd rather use the command line. Also, Eclipse just seems like a real dog.
  2. NetBeans: With the Python plug-in, NetBeans works fine, so I'll stick with it until something better comes along.

Django (Front-end)

The four features that make Django attractive:

  • Object-relational mapper: Define your data models entirely in Python. You get a rich, dynamic database-access API for free — but you can still write SQL if needed.
  • Automatic admin interface: Save yourself the tedious work of creating interfaces for people to add and update content. Django does that automatically, and it's production-ready.
  • Elegant URL design: Design pretty, cruft-free URLs with no framework-specific limitations. Be as flexible as you like.
  • Template system: Use Django's powerful, extensible and designer-friendly template language to separate design, content and Python code.
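
To give a feel for the first and third items, here's a minimal sketch of a model and a URL mapping. The names are my own placeholders (not the project's actual schema), written in the Django 1.0-era style:

```python
# models.py -- minimal Django model sketch; model and field names are
# hypothetical placeholders, not the project's real schema.
from django.db import models

class Study(models.Model):
    patient_id = models.CharField(max_length=32)
    uploaded = models.DateTimeField(auto_now_add=True)
    notes = models.TextField(blank=True)

# The ORM gives you queries like this for free:
#   Study.objects.filter(patient_id="123").order_by("-uploaded")

# urls.py -- clean, regex-based URL mapping (Django 1.0-era syntax)
from django.conf.urls.defaults import patterns

urlpatterns = patterns('myapp.views',
    (r'^studies/(?P<study_id>\d+)/$', 'study_detail'),
)
```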

Carefully walk through the four-part Django tutorial. Beware: there are three versions of the tutorial (0.96, 1.0, and "Latest"). Make sure you're using the one you want.

For Django integration with GAE I'm using app-engine-patch.  I had first tried Google App Engine Helper for Django, but I found that app-engine-patch works much better.

Data Integration (Back-end)

Getting data to and from the S3 server will be a critical component.  I have only started looking into this, but the Amazon documentation seems very good.  The Getting Started Guide examples are presented in multiple languages (PHP, C#, Java, Perl, Ruby, Python).  A Python interface to Amazon Web Services, Boto, also looks like it might be useful.
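
For example, a first pass at pushing a file to S3 with Boto might look something like this (the credentials, bucket, and key names are placeholders -- I haven't settled on any of this yet):

```python
# Minimal Boto sketch for uploading and retrieving an S3 object.
# Access keys, bucket name, and object keys are hypothetical placeholders.
from boto.s3.connection import S3Connection
from boto.s3.key import Key

conn = S3Connection('ACCESS_KEY_ID', 'SECRET_ACCESS_KEY')
bucket = conn.get_bucket('example-medical-data')     # hypothetical bucket

k = Key(bucket)
k.key = 'studies/patient-123/study-001.edf'          # hypothetical object key
k.set_contents_from_filename('study-001.edf')        # upload
k.get_contents_to_filename('study-001-copy.edf')     # download it back
```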

Amazon S3 POST is an efficient way to move data to S3:

[Diagram: S3 POST]
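
As I understand it, the browser POSTs the file directly to the bucket along with a policy document that you sign server-side with your secret key. A rough sketch of generating that signature (bucket, limits, and keys below are placeholders):

```python
# Sketch of signing a policy for a browser-based S3 POST upload.
# The secret key, bucket, prefix, and limits are hypothetical placeholders.
import base64
import hashlib
import hmac
import json

AWS_SECRET = b'SECRET_ACCESS_KEY'

policy_document = {
    "expiration": "2009-12-01T12:00:00.000Z",
    "conditions": [
        {"bucket": "example-medical-data"},
        ["starts-with", "$key", "uploads/"],
        {"acl": "private"},
        ["content-length-range", 0, 10485760],   # cap uploads at 10 MB
    ],
}

policy = base64.b64encode(json.dumps(policy_document).encode("utf-8"))
signature = base64.b64encode(
    hmac.new(AWS_SECRET, policy, hashlib.sha1).digest()
)

# 'policy' and 'signature' go into hidden fields of the HTML form that the
# browser POSTs straight to the bucket, along with the AWSAccessKeyId field.
print(policy.decode(), signature.decode())
```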

The back-end will require much more investigation.

For the additional database needs (account management, logging, auditing, etc.) I'll just use the GAE Datastore.
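
Something along these lines (a sketch only -- the model and fields are hypothetical, but this is the shape of the google.appengine.ext.db API):

```python
# Sketch of a GAE Datastore model for the auditing/logging side.
# Model name and properties are hypothetical placeholders.
from google.appengine.ext import db

class AuditRecord(db.Model):
    user_email = db.StringProperty(required=True)
    action = db.StringProperty(choices=('login', 'upload', 'download'))
    timestamp = db.DateTimeProperty(auto_now_add=True)

def log_action(email, action):
    AuditRecord(user_email=email, action=action).put()

# Recent activity for a user:
#   AuditRecord.all().filter('user_email =', email).order('-timestamp').fetch(20)
```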

Overwhelmed

There's a lot of "stuff" here. Investigating and evaluating it all plus making decisions is a daunting process.

The purpose of going through these selections is to reduce the number of variables so I can start concentrating on an architecture and design that will meet the project requirements. There are still many unknowns though, and I'm sure there will be major bumps in the road that will cause me to change direction.

UPDATE (11/21/2010): Beware -- you get what you pay for!: Goodbye Google App Engine (GAE)

Guest Article: Static Analysis in Medical Device Software (Part 2) — Methodology

Thursday, June 4th, 2009

Pascal Cuoq at Frama-C continues his discussion of static analysis for medical device software. This is part 2 of 3. Part 1 is here.

In the second part of this article I write about methodology, where tools and engineering come together to produce software that you can entrust with lives. I do not avoid talking about the work my colleagues and I do, but I do mention the work of others too.

The layman often assumes that it must be impossible to make software that works as intended. It is a natural conclusion to draw from one's experience with personal computers, mobile phones, car on-board computers and vending machines. The layman's opinion is biased because for most people, embedded software is the means rather than an end, and therefore is never noticed when it works. For instance, my own digital reflex camera contains a fair amount of software. Still, I have never observed it to deviate from the behavior described in the thick manual that came with it -- there are some particularities that I would call functional bugs, but since the manual describes them at length, as the old joke goes, they are features.

Software that works is not impossible. It is only that, as the late Douglas Adams might have put it, software that doesn't work is slightly cheaper. Moderately large software systems that work well enough not to be noticed can be produced. It is "only" a matter of having simple rules that enforce readability of the developed code by people who have not written it, and an appropriately sized budget for code reviews and quality assurance (usually testing, but bug-finding software analyzers are used here too, and they would be used more if their strengths were not so widely misunderstood). This statement does not include very large codebases and concurrent systems, which we still aren't very good at building reliably but keep trying anyway.

The specification for my digital camera is the thick manual, although there are also internal specifications for sub-components of the camera's software that I, as an end user, do not get to see. The internal specifications naturally tend to be more technically detailed as they deal with smaller and smaller sub-components. As components are assembled, it becomes possible to check that the corresponding specification for the sub-assembly is satisfied. This method is called the V-Model of software development, although one wonders why it needs such a high-sounding name: almost every manufactured physical object has been built from sub-components with pre-determined specifications since time immemorial.

This has nothing to do with the production of critical code. Or rather, the two components above, development according to an enforced development standard, and quality assurance (debugging), remain but become a small part of the picture in the development of critical code. Two additional components, at least as large as the first two, are the certification and the authority.

Certification is the additional, reflexive examination of the development, verification (i.e. the software conforms to specification) and validation (i.e. the specification corresponds to the actual need) processes.

One difference between software and hardware is that it is harder to make sure that software satisfies the original requirements. This was made very clear in the article that prompted this series of blog posts. And this is why critical software particularly needs certification. Certification is not so much the testing of the software against the specification (this is called "debugging" and it's not specific to critical software) but a cohesive list of arguments leading to the conclusion that whatever testing has been done was sufficient to find any possible flaw with the expected confidence. A certification file does not state "we used this development tool and we ran these 1000 tests for this component" but "we used this development tool, and here are the reasons why we think it's acceptable. Here are the reasons why we think that these 1000 tests are sufficient to ensure that this component works as expected (and, incidentally, here are the tests and their results)". As you would expect, when a static analyzer is used, the certification file does not read "Here is the tool we used and the results we obtained" but "Here is the tool we used. Here is how we established that this tool could reliably be used to ensure this aspect of the requirements, (and incidentally, here are the results we obtained)".

The authority defines the expectations for the certification, and studies the certification file once submitted. In the end, it all comes down to convincing the competent, financially disinterested humans who check the certification file that all the necessary steps have been taken to ensure the safety of the critical device.

We now arrive at the first statement from the article that I disagree with: that in static analysis of software, "achieving a 100% recall rate is rare, if not impossible, and may only be possible at the cost of a very high number of false positives".

First, a 100% recall rate corresponds to the absence of false negatives, which is a perfectly achievable objective. Static analyzers with this property are called "correct" (or "sound"). These adjectives have meaning only in a context where it is clear what bugs are being looked for and what assumptions are made to this end. Assuming this context is unambiguous, they mean that as long as the tool's assumptions are respected, no bug in the analyzed program is left undetected.
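
For readers less familiar with the terminology: recall is the fraction of real defects that the tool reports, i.e. TP / (TP + FN), and it reaches 100% exactly when there are no false negatives. A toy illustration with made-up numbers:

```python
# Toy illustration of recall vs. false negatives (numbers are invented).
def recall(true_positives, false_negatives):
    return true_positives / float(true_positives + false_negatives)

print(recall(true_positives=40, false_negatives=10))  # 0.8  -- some bugs were missed
print(recall(true_positives=50, false_negatives=0))   # 1.0  -- no false negatives
```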

Two examples of commercially available static analyzers that have been designed from the ground up to have no false negatives are PolySpace, now distributed by The MathWorks, and Astrée, soon to be distributed by AbsInt. Allow me, however, to translate the sentence "Astrée is capable of producing exactly zero false alarms" from that web page: "false alarms" mean "false positives". Astrée, by design, does not have any false negatives. If it failed to notice a possible run-time error, it would be a bug which, I am sure, would be promptly fixed. The "no false positives" claim only means that it does not have any false positives on some pre-determined representative pieces of software. It is certainly not a guarantee, since, as stated earlier, it is a mathematical impossibility for a static analyzer to reach a verdict for any analyzed program with neither false positives nor false negatives. The best way to determine the number of false positives you can expect Astrée to produce for your code is, as with any other analyzer, to try it.

Now, except in the magical world of marketing, it is indeed true that the fewer false negatives are allowed in the results, the more false positives can be expected. This dilemma is the same one that occurs every time something can only imperfectly be detected. Considering the target readership for the blog of my kind host, I do not think that I need to harp on this. But, if the medical test analogy does not work for you, consider the example of the shoestring eyelets on my shoes, which cause the metal detectors in international airports to ring almost every time (false positive) because it has become unacceptable in the last few years to have the slightest risk of a weapon going undetected through the checkpoint (false negative).

Every system has its assumptions: in the case of the airport detector, one is that a weapon is assumed to include some metal. This is a good opportunity to introduce in passing another distinction: "safety" works against the physical world (failures, birds flying into reactors, ...). "Security" works against conscious opponents who are actively trying to use your assumptions to their advantage. This distinction can be applied to software analysis but it is more general than that. Still, even if what you are doing is categorized as "safety", if it's critical, you have to be aware of your assumptions. So the two disciplines are not always very different in philosophy, although they often aim at different objectives.

Thanks to a number of recent advances on the theoretical side, as well as the increase in the computational power available in the workstations where the analysis takes place, you can expect the number of false positives given by a correct static analyzer on your embedded code to be contained. It would be cautious to disbelieve claims that there won't be any.

In addition to the above two static analyzers, I can mention Caveat, another static analyzer without false negatives that has been developed in the laboratory where I work. Caveat is commercially available, although we do not advertise it because it is targeted at very high-criticality software, which does not concern many people (we consider it to be most useful for code with a criticality comparable to level A, the highest in the DO-178B avionics certification standard). Since I am in a mood to take single sentences from web pages and comment on them, please allow me to do it once more: the sentence "[Using Caveat, Airbus France's] goal is to detect errors as soon as possible in the development cycle, and not to prove the software" was written at a time when Airbus France was indeed experimenting with Caveat as an R&D project. This sentence is now completely obsolete. Caveat has been officially used for part of the verification of part of the software of the Airbus A380 — that is, precisely, to establish beyond doubt certain properties about the analysed source code, and in substitution for the unit tests whose role would have been to establish these properties in a more traditional process. As the DO-178B standard mandates, Caveat has been qualified by Airbus as a verification tool to be used for the certification of this particular software.

Also from this laboratory, there is Frama-C, which is available too since it's Open Source. Frama-C is a research prototype to which the experimentation of new ideas has shifted (while Caveat is still being maintained for Airbus and any industrial user who requires it). Frama-C is more of a framework for static analyzers than a static analyzer per se. The analyzers that have been developed in Frama-C so far rely on various techniques but they are all without false negatives. Some of these analyzers are now reliable enough to be considered for R&D experimentation. Caveat was a research prototype too at the time Airbus decided to use it in production and to make it part of its certification process. Whether or not the tool you intend to use comes in a cardboard box, you will have to explain the measures you took to ensure that it was the right tool to use for what you were using it for. What it is called matters less than the measures you took.

The second statement from the article I disagree with is that "static analysis is intended to supplement and improve the effectiveness of existing best practices in testing. It should not be thought of as a substitute for device developers' current testing activities". Of course, if you are using a bug-finding static analyzer with false negatives, you will have a hard time justifying why you removed a single test from those you would have done without the analyzer. Such a tool is most useful in the debugging phase, to identify and remove bugs as quickly as possible, not in the verification phase of a process subject to certification. But when Airbus used Caveat for the A380, it was precisely in substitution for existing unit tests. The fact that Caveat is designed not to have false negatives was one of the arguments in the validation of Caveat as a verification tool to establish the properties that were previously guaranteed by these unit tests, with the required confidence.

Another way to look at this question is the following: bug-finding static analyzers (which have false negatives) have the potential to be better for debugging than sound analyzers (without false negatives) because, by accepting the possibility of false negatives, they can reduce the number of false positives (and save the user time). This debugging phase can be, and often is, lightly covered in the certification because it is later followed by verification, which is the important second check. In a certification-covered verification process, the bugs have already been ironed out and the engineers are not trying to find more bugs but to prove that there aren't any. Any positive is going to be a false positive in this context, even if it comes from the most cautious heuristic tool (a tool that makes a lot of effort to warn only when it is quite certain that a problem exists). On the other hand, during the certified verification process, a heuristic tool's contribution to the bottom line is harder to quantify, since the objective of verification is not to find bugs but to establish that there aren't any.

The statement that there aren't any bugs left when certification starts may look like an exaggeration, but it isn't. If the certification requirements are stringent, changing any part of the code (to fix a bug) means starting the verification from scratch. This is a protection against, among other things, the dangers of C that were alluded to in the first part of this article. If you find bugs at that stage, you are not doing it optimally from the economic point of view (and you are starting afresh a heavy, certification-covered verification process in which, hopefully for you, you will not discover any new bug this time).

I would like to acknowledge the careful editing of my host, the suggestions of my colleague Virgile Prevosto in writing part 1, and the remarks of both my supervisor Benjamin Monate and David Delmas (Airbus France) concerning the present part 2 of this article. The third and last part of this series will be on the topic of formal functional specifications, one of the under-used new tools that have a contribution to make in the verification of critical software. In conclusion, here is a quoted statistic in the style, if not the spirit, of Douglas Coupland's Generation X:

Number of human lives whose loss has been attributed to software failure of a civil airplane: 0