Pascal Cuoq at Frama-C continues his discussion of static analysis for medical device software. This is part 2 of 3. Part 1 is here.
In the second part of this article I write about methodology, where tools and engineering come together to produce software that you can entrust with lives. I do not avoid talking about the work my colleagues and I do, but I mention the work of others as well.
The layman often assumes that it must be impossible to make software that works as intended. It is a natural conclusion to draw from one's experience with personal computers, mobile phones, car on-board computers and vending machines. The layman's opinion is biased because for most people, embedded software is a means rather than an end, and is therefore never noticed when it works. For instance, my own digital reflex camera contains a fair amount of software. Still, I have never observed it to deviate from the behavior described in the thick manual that came with it -- there are some particularities that I would call functional bugs, but since the manual describes them at length, as the old joke goes, they are features.

Software that works is not impossible. It is only that, as the late Douglas Adams would have put it, software that doesn't work is slightly cheaper. Moderately large software systems that work well enough not to be noticed can be produced. It is "only" a matter of having simple rules that enforce readability of the developed code by people who have not written it, and an appropriately sized budget for code reviews and quality assurance (usually testing, but bug-finding software analyzers are used here too, and they would be used more if their strengths were not so widely misunderstood). This statement does not extend to very large codebases and concurrent systems, which we still aren't very good at building reliably but keep trying anyway.
The specification for my digital camera is the thick manual, although there are also internal specifications for sub-components of the camera's software that I, as an end user, do not get to see. The internal specifications naturally tend to be more technically detailed as they deal with smaller and smaller sub-components. As components are assembled, it becomes possible to check that the corresponding specification for the sub-assembly is satisfied. This method is called the V-Model of software development, although one wonders why it needs such a high-sounding name: almost every manufactured physical object has been built from sub-components with pre-determined specifications since time immemorial.
So far, this has nothing to do with the production of critical code. Or rather, the two components above, development according to an enforced standard and quality assurance (debugging), do remain, but they become a small part of the picture in the development of critical code. Two additional components, each at least as large as the first two, are certification and the authority.
Certification is the additional, reflexive examination of the development, verification (checking that the software conforms to its specification) and validation (checking that the specification corresponds to the actual need) processes.
One difference between software and hardware is that it is harder to make sure that software satisfies the original requirements. This was made very clear in the article that prompted this series of blog posts, and this is why critical software particularly needs certification. Certification is not so much the testing of the software against the specification (this is called "debugging" and it is not specific to critical software) as a cohesive list of arguments leading to the conclusion that whatever testing has been done was sufficient to find any possible flaw with the expected confidence. A certification file does not state "we used this development tool and we ran these 1000 tests for this component" but "we used this development tool, and here are the reasons why we think it is acceptable. Here are the reasons why we think that these 1000 tests are sufficient to ensure that this component works as expected (and, incidentally, here are the tests and their results)". As you would expect, when a static analyzer is used, the certification file does not read "here is the tool we used and the results we obtained" but "here is the tool we used. Here is how we established that this tool could reliably be used to ensure this aspect of the requirements (and, incidentally, here are the results we obtained)".
The authority defines the expectations for the certification, and studies the certification file once submitted. In the end, it all comes down to convincing the competent, financially disinterested humans who check the certification file that all the necessary steps have been taken to ensure the safety of the critical device.
We now arrive at the first statement from the article that I disagree with: that, in static analysis of software, "achieving a 100% recall rate is rare, if not impossible, and may only be possible at the cost of a very high number of false positives".
First, a 100% recall rate corresponds to the absence of false negatives, which is a perfectly achievable objective. Static analyzers with this property are called "correct" (or "sound"). These adjectives have meaning only in a context where it is clear what bugs are being looked for and what assumptions are made to this end. Assuming this context is unambiguous, they mean that as long as the tool's assumptions are respected, no bug in the analyzed program is left undetected.
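To make these terms concrete, here is a minimal sketch in C (the function and its context are invented for illustration, not taken from any real device code). A sound analyzer looking for run-time errors must emit an alarm for the division below, because nothing in this fragment excludes a zero divisor; whether that alarm is a false positive depends on what the callers can actually pass.

/* Hypothetical fragment, for illustration only. */
int scaled_reading(int raw, int scale)
{
  /* A sound analyzer must warn here: if scale can be 0 for some
     call, the division is undefined. If every caller guarantees
     scale != 0, the warning is a false positive. A heuristic
     (unsound) analyzer may choose to stay silent, which becomes
     a false negative whenever the division can in fact fail. */
  return raw / scale;
}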
Two examples of commercially available static analyzers that have been designed from the ground up to have no false negatives are PolySpace, now distributed by The MathWorks, and Astrée, soon to be distributed by AbsInt. Allow me, however, to translate the sentence "Astrée is capable of producing exactly zero false alarms" from that web page: "false alarms" means "false positives". Astrée, by design, does not have any false negatives. If it failed to notice a possible run-time error, that would be a bug which, I am sure, would be promptly fixed. The "no false positives" claim only means that it does not produce any false positives on some pre-determined, representative pieces of software. It is certainly not a guarantee, since, as stated earlier, it is a mathematical impossibility for a static analyzer to reach a verdict for any analyzed program with neither false positives nor false negatives. The best way to determine the number of false positives you can expect Astrée to produce for your code is, as with any other analyzer, to try it.
Now, except in the magical world of marketing, it is indeed true that the fewer false negatives are allowed in the results, the more false positives can be expected. This is the same dilemma that occurs every time something can only be detected imperfectly. Considering the target readership of my kind host's blog, I do not think I need to harp on this. But if the medical-test analogy does not work for you, consider the shoestring eyelets on my shoes, which cause the metal detectors in international airports to ring almost every time (false positives), because it has become unacceptable in the last few years to take the slightest risk of a weapon going through the checkpoint undetected (a false negative).
Every system has its assumptions: in the case of the airport detector, one assumption is that a weapon includes some metal. This is a good opportunity to introduce, in passing, another distinction: "safety" works against the physical world (failures, birds flying into engines, ...), whereas "security" works against conscious opponents who actively try to use your assumptions to their advantage. This distinction can be applied to software analysis, but it is more general than that. Still, even if what you are doing is categorized as "safety", if it is critical, you have to be aware of your assumptions. So the two disciplines are not always very different in philosophy, even though they often aim at different objectives.
Thanks to a number of recent advances on the theoretical side, as well as the increase in the computational power available in the workstations where the analysis takes place, you can expect the number of false positives produced by a correct static analyzer on your embedded code to remain manageable. It would be prudent, however, to disbelieve claims that there won't be any.
In addition to the above two static analyzers, I can mention Caveat, another static analyzer without false negatives that has been developed in the laboratory where I work. Caveat is commercially available, although we do not advertise it because it targets very high-criticality software, a need few projects have (we consider it most useful for code whose criticality is comparable to level A, the highest in the DO-178B avionics certification standard). Since I am in a mood to take single sentences from web pages and comment on them, please allow me to do it once more: the sentence "[Using Caveat, Airbus France's] goal is to detect errors as soon as possible in the development cycle, and not to prove the software" was written at a time when Airbus France was indeed experimenting with Caveat as an R&D project. This sentence is now completely obsolete. Caveat has been officially used for part of the verification of part of the software of the Airbus A380: that is, precisely, to establish beyond doubt certain properties of the analyzed source code, as a substitute for the unit tests whose role would have been to establish these properties in a more traditional process. As the DO-178B standard mandates, Caveat has been qualified by Airbus as a verification tool to be used for the certification of this particular software.
Also from this laboratory comes Frama-C, which is available as well, since it is Open Source. Frama-C is the research prototype to which experimentation with new ideas has shifted (while Caveat is still being maintained for Airbus and any industrial user who requires it). Frama-C is more of a framework for static analyzers than a static analyzer per se. The analyzers that have been developed within Frama-C so far rely on various techniques, but they are all without false negatives. Some of these analyzers are now reliable enough to be considered for R&D experimentation. Caveat, too, was a research prototype at the time Airbus decided to use it in production and to make it part of its certification process. Whether or not the tool you intend to use comes in a cardboard box, you will have to explain the measures you took to ensure that it was the right tool for the use you made of it. What the tool is called matters less than those measures.
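To give an idea of what using such a tool looks like in practice, here is a minimal sketch (the file name and code are invented, and the exact command-line options depend on the Frama-C version; something along the lines of frama-c -val -main lookup reading.c would run the value analysis with lookup as the entry point). The analysis computes an over-approximation of the values each variable may take and emits an alarm for every operation it cannot prove safe.

/* reading.c -- hypothetical example, not taken from any real device. */
int table[10];

int lookup(int sensor)
{
  int index = sensor % 10;  /* for a negative sensor, index may be negative */
  return table[index];      /* expect an alarm: index may be out of bounds */
}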
The second statement from the article I disagree with is that "static analysis is intended to supplement and improve the effectiveness of existing best practices in testing. It should not be thought of as a substitute for device developers' current testing activities". Of course, if you are using a bug-finding static analyzer with false negatives, you will have a hard time justifying why you removed a single test from those you would have run without the analyzer. Such a tool is most useful in the debugging phase, to identify and remove bugs as quickly as possible, not in the verification phase of a process subject to certification. But when Airbus used Caveat for the A380, it was precisely as a substitute for existing unit tests. The fact that Caveat is designed not to have false negatives was one of the arguments in the validation of Caveat as a verification tool for establishing, with the required confidence, the properties that were previously guaranteed by these unit tests.
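To illustrate what "substituting for a unit test" can mean, here is a minimal sketch using ACSL, the annotation language of Frama-C (Caveat uses its own, comparable notation; the function and its contract below are invented for illustration). Instead of running the function on a finite set of inputs, a sound tool is asked to establish that the post-condition holds for every input allowed by the pre-condition.

/* Hypothetical example: the contract below, rather than a set of
   test vectors, is what the tool is asked to verify. */
/*@ requires 0 <= raw <= 4095;
    ensures 0 <= \result <= 255; */
int to_8bit(int raw)
{
  return raw / 16;  /* 4095 / 16 == 255, so the post-condition holds */
}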
Another way to look at this question is the following: bug-finding static analyzers (those with false negatives) have the potential to be better for debugging than sound analyzers (those without false negatives), because by accepting the risk of false negatives they can reduce the number of false positives (and save the user time). This debugging phase can be, and often is, only lightly covered by the certification, because it is later followed by verification, which is the important second check. In a certification-covered verification process, the bugs have already been ironed out and the engineers are no longer trying to find more bugs but to prove that there aren't any. In this context, any positive is going to be a false positive, even if it comes from the most cautious heuristic tool (one that makes a great effort to warn only when it is quite certain that a problem exists). During the certified verification process, then, a heuristic tool's contribution to the bottom line is hard to quantify, since the objective of verification is not to find bugs but to establish that there aren't any.
The statement that there aren't any bugs left when certification starts may look like an exaggeration, but it isn't. If the certification requirements are stringent, changing any part of the code (to fix a bug) means starting the verification again from scratch. This is a protection against, among other things, the dangers of C that were alluded to in the first part of this article. If you find bugs at that stage, you are not proceeding optimally from the economic point of view (and you are starting afresh a heavy, certification-covered verification process in which, hopefully, you will not discover any new bugs this time).
I would like to acknowledge the careful editing of my host, the suggestions of my colleague Virgile Prevosto in writing part 1, and the remarks of both my supervisor Benjamin Monate and David Delmas (Airbus France) concerning the present part 2 of this article. The third and last part of this series will be on the topic of formal functional specifications, one of the under-used new tools that have a contribution to make in the verification of critical software. In conclusion, here is a quoted statistic in the style, if not the spirit, of Douglas Coupland's Generation X:
Number of human lives whose loss has been attributed to software failure of a civil airplane: 0
Here is a (very) partial list of software failures in commercial aircraft which did *not* lead to human fatalities, courtesy of the FAA’s database of Airworthiness Directives:
http://rgl.faa.gov/Regulatory_and_Guidance_Library/rgAD.nsf/WebSearchDefault?SearchView&Query=software
http://en.wikipedia.org/wiki/Airworthiness_Directive
I was not previously aware that it was available on the internet, and I only knew of its conclusions on hardware design methodology, but Richard P. Feynman's report on the NASA Space Shuttle has a section on software methodology. If you were interested enough to read this far, it is worth taking a look:
http://tr.im/RPFeynman
In the article above, where I try to explain that during verification statistically all alarms turn out to be false alarms, I wish I had quoted these sentences from the report:
“A discovery of an error during verification testing is considered very serious, and its origin studied very carefully to avoid such mistakes in the future. Such unexpected errors have been found only about six times in all the programming and program changing (for new or altered payloads) that has been done.”
Feynman does not open a discussion of false positives in his report because the Shuttle software was verified exclusively by testing, and testing (which considers one execution path at a time) has, so to speak, no false positives by construction.