Archive for the ‘Tools’ Category

Exploring Clojure (and FP vs OOP)

Sunday, January 27th, 2019

I've always viewed functional programming (FP) from afar, mostly because object-oriented programming (OOP) is the dominant development methodology I've been using (Java, Ruby, C#, etc.) for many years.  A majority of the articles I've read on FP have statements like this:

If you’ve never tried functional programming development, I assure you that this is one of the best time investments you can make. You will not only learn a new programming language, but also a completely new way of thinking. A completely different paradigm.

Switching from OOP to Functional Programming gives an overview of the differences between FP and OOP. It uses Scala and Haskell for the FP example code, but I think it still does a good job of getting the major concepts across:

I do not think that FP, or any single paradigm/language/framework/etc. for that matter, is a silver bullet. On the contrary, I'm a true believer in the "right tool for the job" philosophy.  This is particularly true in the software industry where there is such a wide variety of problems that need to be solved.

This view of programming paradigms is cute but actually misleading:

As a developer, it's important to always be learning new problem-solving approaches, i.e. adding new tools to your tool belt. This will not only allow you to select the best solution(s) for the job, but you'll also be better able to recognize the trade-offs and potential problem areas that might arise with any solution. I think understanding FP concepts will make you a better programmer, but not necessarily because you are using FP techniques and tools.

Functional Programming Is Not a Silver Bullet sums it up best:

Don’t be tricked into thinking that functional programming, or any other popular paradigm coming before or after it, will take care of thinking about good code design instead of us.

The purpose of this article is to present my own experiences in trying to use Clojure/FP as an alternative approach to traditional OOP. I do not believe there is anything here that has not already been covered by many others, but I hope another perspective will be helpful.

Lisp

I chose a Lisp dialect for this exploration for several reasons:

  1. I have some Lisp experience from previous projects and was always impressed with its simplicity and elegance. I really wanted to dig deeper into its capabilities, particularly macros (code as data - programs that write programs). See Lisp, Smalltalk, and the Power of Symmetry for a good discussion of both topics.
  2. I'm also a long-time Emacs user (mostly for org-mode) so I'm already comfortable with Lisp.
  3. This profound advice:

For all you non-Lisp programmers out there, try LISP at least once before you die. You will not regret it.

Lisp has a long and storied history (it is the second-oldest high-level programming language, after Fortran). Here's a good Lisp historical read: How Lisp Became God's Own Programming Language.

I investigated a variety of Lisp dialects (Racket, Common Lisp, etc.) but decided on Clojure, primarily because it has both JVM and JavaScript (ClojureScript) support. This would allow me more opportunities to use it for real-world projects. This is also why I did not consider alternative FP languages like Haskell and Erlang.

Lastly, the obligatory XKCD cartoon that makes fun of Lisp parentheses (which I address below):

Why Clojure?

Why Clojure? I’ll tell you why… provides a good summary and references:

  1. Simplicity
  2. Java Interoperability
  3. REPL (Read Eval Print Loop)
  4. Macros
  5. Concurrency
  6. ClojureScript
  7. Community

Here are some more reasons from Clojure - the perfect language to expand your brain?

  1. Pragmatism (videos at Clojure TV)
  2. Data processing: Sequences and laziness
  3. Built-in support for concurrency and parallelism

There are also many good articles that describe the benefits of FP (immutable objects, pure functions, etc.).

Clojure (and FP) enthusiasts claim that their productivity is increased because of these attributes. I've seen this stated elsewhere, but from the article above:

Clojure completely changed my perspective on programming. I found myself as a more productive, faster and more motivated developer than I was before.

It also has high praise from high places:

Who doesn't want to hang out with the smart kids?

Clojure development notes

The following are some Clojure development observations based on both the JSON Processor (see below) and a few other small ClojureScript projects I've worked on.

The REPL

The read-eval-print loop, and more specifically the networked REPL (nREPL) and its integration with Emacs clojure-mode/CIDER, is a real game-changer. Java has nothing like it (well, except for the Java 9 JShell, which nobody knows about) and even Ruby's IRB ("interactive ruby") is no match.

Being able to directly evaluate expressions without having to compile and run a debugger is a significant development time-saver. This is a particularly effective tool when you are writing tests.
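For example, here's the sort of trivial exchange (purely illustrative) that happens constantly at the REPL, defining a function and immediately exercising it, with no compile/deploy cycle:

user=> (defn greet [name] (str "Hello, " name))
#'user/greet
user=> (greet "Clojure")
"Hello, Clojure"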

Parentheses

A lot of people (like XKCD above) make fun of the Lisp parentheses. I think there are two considerations here:

  1. Keeping parentheses matched while editing. In Clojure, this also includes {} and []. Using a good editor is key - see Top 5 IDEs and text editors for Clojure. For me, Emacs smartparens in strict mode (i.e. don't allow mismatches at all), plus some wrap/unwrap and slurp/barf keyboard bindings, all but solved this issue.
  2. When reading Lisp code, I think parentheses get a bad rap. IMO, the confusion has more to do with the difference in basic Lisp flow-control syntax than with stacked parentheses. Here's a minimal comparison of Ruby and Clojure if syntax (illustrative values):
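Ruby:

if x > 10
  "big"
else
  "small"
end

Clojure:

(if (> x 10)
  "big"
  "small")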

Once you get used to the differences, code is code. More to the point, bad code is bad code no matter what language it is written in. This is important. Here's a good read on the subject: Effective Mental Models for Code and Systems ("the best code is like a good piece of writing").

Project Management

Leiningen ("automating Clojure projects without setting your hair on fire") is essentially like Java's Maven, Ruby's Rake, or Python's pip. It provides dependency management, plug-in support, applications/test runners, customized tasks, etc.

Coming from a Maven background, lein configuration management and usage made perfect sense. The best thing I can say is that it always got the job done, and even more important, it never got in the way!
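For reference, all of that is driven by a single project.clj file at the project root. A minimal hypothetical one (the namespace and versions here are illustrative, not the project's actual file) looks like this:

(defproject json-processor "0.1.0-SNAPSHOT"
  :description "JSON include processor"
  :dependencies [[org.clojure/clojure "1.10.1"]
                 [org.clojure/data.json "0.2.6"]]
  :main json-processor.core)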

Getting Answers

I found the documentation (ClojureDocs) to be very good for two reasons:

  1. Every function page has multiple examples, and some are quite extensive. You typically only need to see one or two good examples to understand how to use a function for your purposes; reading the actual function description is rarely necessary.
  2. Related functions. The "SEE ALSO" section provides links to functions that can usually improve your code: if → if-not, if-let, when, ... This is very helpful when you're learning a new language (see the example below).
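For instance, if-let, one of the SEE ALSO suggestions for if, binds a value and tests it in a single step (a trivial illustration):

;; bind the result of an expression and branch on its truthiness
(if-let [v (first [1 2 3])]
  (str "got " v)
  "empty")
;; => "got 1"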

I lurked around some of the community sites (below). The threads I read were respectful and members seemed eager to help.

Clojure Report Card

Language: A

On the whole, I was very pleased with the development experience. Solving problems with Clojure really didn't seem that much different from other languages. The extensive core language capabilities, along with the robust ecosystem of libraries (The Clojure Toolbox), make Clojure a pleasure to use.

I see a lot of potential for practical uses of Clojure technologies. For example, Clojurified Electron plus reagent-forms allowed me to build a cross-platform Electron desktop form application in just a couple of days.

I was only able to explore the tip of the Clojure iceberg. Based on this initial experience, I'm really looking forward to utilizing more of the language capabilities in the future.

FP vs OOP: B

My experience in this brief exploration did not live up to my expectations for FP. The lower grade reflects the fact that the Clojure projects I've worked on were not big enough to really take advantage of the FP benefits described above.

This is a cautionary tale for selecting any technology to solve a problem. Even though you might choose a language/framework that advertises particular benefits (FP in this case), it doesn't necessarily mean that you'll be able to take advantage of those benefits.

This also highlights the silver bullet vs good design mindset mentioned earlier. To be honest, I somehow thought that Clojure/FP would magically solve problems for me. Of course, I was wrong!

I'm sure this grade will improve for future projects!

Macros: INC (incomplete)

I was also not able to exercise macros as fully as I wanted to, which again was a consequence of the nature of the projects. I normally do DSL work with Ruby, but next time I'll be sure to try Clojure instead.

TL;DR

The rest of this article digs a little deeper into the differences between the Ruby and Clojure implementations of the JSON Processor project described below.

At the end of the day, the project includes two implementations of close to identical functionality that can be used for comparison. Both have:

    1. Simple command line parsing and validation
    2. File input/output
    3. Content caching (memoization; see the sketch after this list)
    4. JSON parser and printer
    5. Recursive object (hash/map) traversal
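On item 3: Clojure has memoize built in, which wraps a function with a cache keyed by its arguments. A plausible (hypothetical, not the project's actual code) use here is caching parsed file contents:

(require '[clojure.data.json :as json])

;; read-json-file is a hypothetical helper: memoize caches the parsed
;; result per path, so repeated includes of the same file parse only once
(def read-json-file
  (memoize
    (fn [path]
      (json/read-str (slurp path)))))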

The Ruby version (~57 lines) is about half the size of the Clojure one (~110 lines). This is a small example, but it does point out that Ruby is a terser language and that there is some overhead to the Clojure/FP programming style (see Pure Functions, below).

JSON Processor

The best way to learn a new language is to try to do something useful with it. I had a relatively simple Ruby script for processing JSON files. Reproducing its functionality in Clojure was my way of experiencing the Clojure/FP approach.

The project is here: json-processor

The Ruby version is in the ./ruby directory, while the Clojure version is in ./src/json_processor. See the README.md file for command-line usage.

The processor is designed to simply detect a JSON key that begins with "include" and replace that key/value pair with the contents of the referenced file's (./<current_path>/<value>.json) top-level object. Having this include capability allows reuse of JSON objects and can improve the management of large JSON files.

So, suppose two files, base.json and level1.json, exist in the same directory. A minimal hypothetical pair (the contents below are illustrative, not the project's actual test files) might be:
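base.json:

{
  "name": "example",
  "include": "level1"
}

level1.json:

{
  "level1": {
    "color": "blue",
    "size": 10
  }
}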

After running base.json through the processor, the contents of the level1 object in the level1.json file will replace "include":"level1", with the result being:
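{
  "name": "example",
  "color": "blue",
  "size": 10
}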

Also, included files can contain other include files, so the implementation is a good example of a recursive algorithm.

There are example files in the ./test/resources directory that are slightly more complex and are used for testing.


Immutability

Here's the Ruby recursive method (the sketch below is reconstructed from the description that follows, not the project's verbatim code; get_json_content is the script's JSON-loading helper):
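# Reconstructed sketch; get_json_content and @dir_name come from the
# surrounding class (see Pure Functions, below)
def process_json(obj)
  # iterate over a snapshot of the keys so the hash can be mutated in place
  obj.keys.each do |k|
    v = obj[k]
    if k.to_s.start_with?("include")
      # delete the include pair and splice in the referenced file's content
      obj.delete(k)
      obj.merge!(process_json(get_json_content(File.join(@dir_name, v)).values.first))
    elsif v.is_a?(Hash)
      process_json(v)
    end
  end
  obj
end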

The passed-in object is purposely modified. The Ruby each function is used to iterate over each key/value pair and replace the included content as needed. It deletes the "include" key/value pair and adds the JSON file content in its place. Again, the returned object is a modified version of the object passed to the function. 

The immutability of Clojure objects and the use of reduce-kv mean that all key/value pairs need to be added to the accumulator m (reduce-kv's init collection) via (assoc m k v). This was not necessary in the Ruby implementation. A sketch of the Clojure side (assuming the project's get-json helper; the details are illustrative, not the actual source):
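(require '[clojure.string :as str])

;; Sketch only: get-json (a project helper, visible in the macro below) is
;; assumed to load and parse ./<base-dir>/<value>.json into a map.
(defn process-json [base-dir obj]
  (reduce-kv
    (fn [m k v]
      (cond
        ;; an include pair is replaced by the referenced file's expanded content
        (str/starts-with? (name k) "include")
        (merge m (process-json base-dir (first (vals (get-json base-dir v)))))
        ;; nested objects are expanded recursively
        (map? v) (assoc m k (process-json base-dir v))
        ;; every untouched pair still has to be copied into the accumulator m
        :else (assoc m k v)))
    {}
    obj))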

A similar comparison, but with more complexity and detailed analysis, can be found here: FP vs. OO List Processing.

Pure Functions

You'll notice in the Ruby code that the instance variable @dir_name, which is created in the constructor, is used to create the JSON file path:

get_json_content(File.join(@dir_name,v)).values.first

The Clojure code has no instance variables, so base-dir must be passed to every function, as in the process-json-file macro:

`(process-json ~base-dir (get-json ~base-dir ~file-name))))

To an OOP developer, having base-dir as a parameter in every function definition may seem redundant and wasteful. The functional point of view is that:

  1. Having mutable data (@dir_name) can be the source of unintended behaviors and bugs.
  2. Pure functions will always produce the same result for the same inputs and have no side effects, no matter what the state of the application is.

These attributes improve reliability and allow more flexibility for future changes. This is one of the promises of FP.

Final Thought

I highly recommend giving Clojure a try!

Bad joke:

I have slurped the Clojure Kool-Aid and can now only spit good things about it.

Sorry about that. 🙂

UPDATE (22-Aug-19): More Clojure love from @unclebobmartin: Why Clojure?

Creating a Minimally Sized Docker Image

Wednesday, September 23rd, 2015

This is a follow-up to the Publishing a Static AngularJS Application with Docker post.

Relative to the size of a standard Ubuntu Docker image, I thought the 250MB CoreOS image was "lean". Earlier this month I went to a Docker talk by Brian DeHamer and learned that there are much smaller Linux base images available on DockerHub. In particular, he mentioned Alpine, which is only 5MB and includes a package manager.

Here are the instructions for building the same Apache server image from the previous post with Alpine.

The Dockerfile has significant changes. The listing below is a reconstruction consistent with the line-by-line notes that follow (the exact content directory and package flags are assumptions):
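# Reconstructed example; "site/" is a placeholder for the static content dir
FROM alpine:latest

RUN apk --update add apache2
RUN rm -rf /var/cache/apk/*
ENTRYPOINT ["httpd"]
CMD ["-D", "FOREGROUND"]
COPY site/ /var/www/localhost/htdocs/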

Explanation of differences:

line 2: The base image is alpine:latest.

lines 4-5: Unlike the CoreOS image, the base Alpine image does not include Apache. These lines use the apk package manager to install Apache2 and clean up afterwards.

lines 6-7: Use the exec form of the Dockerfile ENTRYPOINT and CMD commands. This starts httpd automatically when a container is launched.

line 8: The static web content is copied to a different directory.

Building and pushing the image to DockerHub is the same as before (the image name below is a placeholder):
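docker build -t <user>/apache-alpine .   # <user>/apache-alpine is a placeholder name
docker push <user>/apache-alpine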

Because of the exec additions to the Dockerfile, the command line for starting the Docker image is simpler (same placeholder name):
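docker run -d -p 80:80 <user>/apache-alpine   # no command argument needed; ENTRYPOINT/CMD supply it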

The resulting Docker image is only 10MB as compared to 290MB for the same content and functionality. Nice!

UPDATE (12-Jun-17): Here's an even smaller image: Stuffing Angular into a Tiny Docker Container (< 2 MB)

The Bumpy Road to a New Development Laptop

Saturday, September 6th, 2014

My 6-year-old Lenovo T400 finally gave up the ghost. It didn't totally die (it probably never will, thank you IBM), but the screen was starting to flicker and it reliably rebooted itself whenever I was doing something useful. Very annoying.

Grief

I went through the standard 5 stages of grief:

  1. Denial: All T400's do this.
  2. Anger: "Damn it, why does this thing keep crashing? I'm sick of this sh*t!".
  3. Bargaining: Maybe if I update to 14.04 it will stop doing this.
  4. Depression: "This sucks!"
  5. Acceptance: OK, time to buy a new laptop.

I'm fine now, but that was a rough 30 minutes!

Decision Process

I'm not going to detail all of my system requirements or decision making process, but here's a high level outline:

  • I primarily need an Ubuntu development machine. My T400 is a dual-boot 12.04/XP. In recent years I've rarely used Windows, but there are some tools that are nice to have around (e.g. Visual Studio).
  • I looked hard at the MacBook Pro, but at the end of the day I just couldn't bring myself to go that route. Besides the higher cost-to-performance ratio compared to the alternatives, I guess I'm just not a Mac person.
  • I really wanted to get an Ultrabook form factor. Not only for the portability, but I'm not ashamed to say that the 'cool factor' played a part in the decision.
  • I looked at all of the standard Ultrabook offerings: Lenovo, ASUS, Dell, System76, Acer, etc. No touch, no 'convertible' (if you need a tablet, buy a tablet), no Windows 8. The deciding factor for me was reliability. Besides the T400, I have a T60 in the closet that still runs fine.
  • So Lenovo it is. The history of the X1 Carbon (see The 2014 Lenovo X1 Carbon: Lenovo Giveth, And Lenovo Taketh Away and Lenovo ThinkPad X1 Carbon 2014 review)  goes back to 2011. The latest version (2014, or Carbon 2) has taken a lot of heat over the keyboard, function keys, and trackpad changes. I'm sure these opinions have merit, but I just want a fast machine that works!

Buying Experience (not good!)

Beware of the Lenovo Outlet. I purchased a 'ThinkPad X1 Carbon 2 - New':


Here's the condition definition (my highlight):

Products that are discontinued, overstocked, or returned unopened. These items are in their original factory sealed packaging and have never been used or opened.

Boy, was I disappointed when the package arrived! First, the only thing in the box was the laptop. No AC power adapter, no docs, no nothing. To my amazement, the machine was in suspend mode. When I opened the lid it woke to a Win7 user password prompt! I didn't even try to guess a password. I couldn't believe it!

The machine was in pretty good shape physically, a little dirty and missing a foot pad, but no dents or scratches. Certainly opened and used!  At least the BIOS confirmed that I got the correct hardware (i7, 8G RAM, 256G SSD).

After many calls to multiple Lenovo service centers I got nowhere. No return, no exchange. Maybe I should write a letter to The Haggler, but even then I probably wouldn't return the machine anyway. I got a great price (much better than what I could find on eBay), and the Lenovo Outlet no longer has any i7 X1 Carbons listed. Also, I'm a techie, so disk partitioning and reinstalling OSs are not a problem.

I'm thinking now that Lenovo might have screwed up a repair shipment and I ended up wiping some poor schmuck's SSD. Oh well.

Anyway, as unpleasant as this was, I now have a development laptop that should meet my needs for many years to come.

Installation Notes

  • Dual boot. Here's the right way: WindowsDualBoot, but because I installed Ubuntu first (mistake) here's what I did:
    1. Used GParted to partition the disk to my liking. Don't forget to add a Linux swap partition (8G for me). The Ubuntu installer will complain if it's not there and will find it automatically if it is.
    2. Created an Ubuntu 14.04 bootable USB stick: How to create a bootable USB stick on Windows. Install Ubuntu on the ext4 partition.
    3. Created a bootable Windows 7 USB stick. The Universal USB Installer above works fine for this. Install Windows 7 on the Windows partition.
    4. After Step #3 the system will only boot Windows. Use Boot-Repair (option #2) to re-install GRUB.
  • Ubuntu 14.04 seems to work flawlessly on the X1. There were only two hardware compatibility issues that I read about:
    1. Not waking up from suspend mode. This is resolved by updating the BIOS firmware.  Upgrading from v1.09 to v1.15 fixed it for me. The Lenovo firmware only comes as a CD image (.iso) or a Windows update application. Because the X1 does not have a CDROM drive the only reasonable way to upgrade is via Windows. People have upgraded the firmware via USB (see BIOS Upgrade/X Series), but it's really ugly.
    2. Fingerprint reader. Haven't tried to use it, and probably won't.

Happy Ending (I hope)

Like most things in life, nothing is ever perfect. This experience was no exception.

I have a JRuby/Rails project with some RSpec tests that take 80 seconds to complete on the T400 and 20 seconds on the X1. I can live with that improvement. 🙂

Hopefully the X1 will last as long as the T400 did.

Guest Article: Static Analysis in Medical Device Software (Part 3) — Formal specifications

Sunday, March 7th, 2010

Pascal Cuoq at Frama-C continues his discussion of static analysis for medical device software. Also see Part 1 and Part 2.

In May 2009, I alluded to a three-part blog post on the general topic of static analysis in medical device software. The ideas I hope will emerge from this third and last part are:

  1. Formal specifications are good,
  2. Partial formal specifications are underrated, and
  3. One should never commit in advance to writing anything, however easy it seems it will be at the time.

Going back to point one, a "functional specification" is a description of what a system/subsystem does. I really mostly intend to talk about formal versions of functional specifications. This only means that the description is written in a machine-parsable language with an unambiguous grammar. The separation between formal and informal specifications is not always clear-cut, but this third blog entry will try to convince you of the advantages of specifications that can be handled mechanically.

Near the bottom of the V development cycle, "subsystem" often means software: a function, or a small tree of functions. A functional specification is a description of what a function (respectively, a tree of functions) does and does not do (the time it takes to execute, for instance, is usually not considered part of the functional specification, although whether it terminates at all can belong in it; this is only a matter of convention). The Wikipedia page on "Design by Contract" lists the following as making up a function contract, and while the term is loaded (it may evoke Eiffel or run-time assertion checking, which are not specifically the topic here), the three bullet points below are a good categorization of what functional specifications are about:

  • What does the function expect, what rules should the caller obey at the time of calling it?
  • What does the function guarantee, what is the caller allowed to expect from the function's results?
  • What properties does the function maintain?

I am tempted to point out that invariants maintained by a function can be encoded in terms of things that the function expects and things that the function guarantees, but this is exactly the kind of hair-splitting that I am resolved to give up on.

The English sentence "when called, this function may modify the global variables G and H, and no other" is almost unambiguous and rigorous, assuming that we leave aliasing out of the picture for the moment. Note that while this is technically something that the function ensures on return (it ensures that for any variable other than G or H, the value of the variable after the call is the same as its value before the call), the property can be thought of more intuitively as something that the function maintains.

The enthusiastic specifier may like the sentence "this function may modify the global variables G and H, and no other" so much that he may start copy-pasting the boilerplate part from one function to another. Why should he take the risk of accidentally introducing an ambiguity? Rewriting from memory may lead him to forget the "may" auxiliary when he does not intend to guarantee that the function will overwrite G and H each time it is called. As with contracts of a more legal nature, copy-pasting is the way to go. The boilerplate may also include jargon that makes it impossible to understand for someone who is not from the field, or even from the company, whence the specifications originate. Ordinary words may be used with a precise domain-specific meaning. All of these are reasons not to paraphrase, and to reuse the specification template verbatim.

The hypothetical specifier may particularly appreciate that the specification above is not only carefully worded but also that a list of possibly modified globals is part of any wholesome function specification. He may — rightly, in my humble opinion — endeavor to use it for all the functions he has to specify near the end of the descending branch of the V cycle. This is when he is ripe for the introduction of a formal syntax for functional specifications. According to Wikipedia, Robert Recorde introduced the equal sign "to auoide the tediouſe repetition of [...] woordes", and the sentence above is a tedious repetition begging for a sign of its own to replace it. When the constructs of the formal language are well-chosen, the readability is improved, instead of being diminished.

A natural idea for expressing the properties that make up a function contract is to use the same language as for the implementation. Being a programming language, it is suitably formal; the specifier, even if he is not the programmer, is presumably already familiar with it; and the compiler can transform these properties into executable code that checks that preconditions are properly assured by callers, and that the function does its own part by returning results that satisfy its postconditions. This choice can be recognized in run-time assertion checking and in Test-Driven Development (in Test-Driven Development, unit tests and expected results are written before the function's implementation and are considered part of the specification).

Still, the choice of the programming language as the specification language has the disadvantages of its advantages: it is a programming language; its constructs are optimized for translation to executable code, with the intent of describing algorithms. For instance, the "no global variable other than G and H is modified" idiom, as expressed in C, is a horrible way to specify a C function. Surely even the most rabid TDD proponent would not suggest it for a function that belongs in the same C file as a thousand global variable definitions.

A dedicated specification language has the freedom to offer exactly the constructs that make it pleasant to write specifications in it. This means directly including constructs for commonly recurring properties, but also providing the building blocks that make it possible to define new constructs for advanced specifications. So a good specification language has much in common with a programming language.

A dedicated specification language may for instance offer a clause such as the following (this is how ACSL, the annotation language used by Frama-C, expresses it)
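/*@ assigns G, H; */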

as a synonym for the boilerplate function specification above, and while this syntax may seem wordy and redundant to the seat-of-the-pants programmer, I hope to have convinced you that in the context of structured development, it fares well in comparison with the alternatives. Functional specifications give static analyzers that understand them something to chew on, instead of having to limit themselves to the absence of run-time errors. This applies especially to correct static analyzers, as emphasized in part 2 of this oversize blog post.

Third parties who contact us are often focused on using static analysis tools to do things they weren't doing before. It is natural to expect that a new tool will allow you to do something new, but a more accurate description of our activity is that we aim to let people do the same specification and verification they are already doing (for critical systems), better. In particular, people who discover tools such as Frama-C/Jessie or other analysis tools based on Hoare-Floyd precondition computations often think these tools are intended for verifying, and can only be useful for verifying, complete specifications.

A complete specification for a function is a specification where all the properties expected of the function's behavior have been expressed formally as a contract. In some cases, there is only one function (in the mathematical sense) that satisfies the complete specification. This does not prevent several implementations from existing for this unique mathematical function. More importantly, it is nice to be able to check that the C function being verified is one of them!

Complete specifications can be long and tedious to write. In the same way that a snippet of code can be shorter than the explanation of what it does and why it works, a complete specification can sometimes be longer than its implementation. And it is often pointed out that these specifications can be so large that once written, it would be too difficult to convince oneself that they do not themselves contain a bug.

But just because we are providing a language that would allow you to write complete specifications does not mean that you have to. It is perfectly fine to write minimal formal specifications accompanied by informal descriptions. To be useful, the tools we are proposing only need to do better than testing (the most widely used verification technique at this level). Informal specifications traditionally used as the basis for tests are not complete either. And there may be bugs both in the informal specification and in its translation into test cases.

If anything, the current informal specifications leave out more details and contain more bugs, because they are not machine-checked in any way. The static analyzer can help find bugs in a specification in the same way that a good compiler's sanity checks and warnings help avoid the stupidest bugs in a program.

In particular, because they are written in a dedicated specification language, formal specifications have better composition properties than, say, C functions. A bug in the specification of one function is usually impossible to overlook when trying to use said specification in the verification of the function's callers. Taking an example from the tutorial/library (authored by our colleagues at the applied research institute Fraunhofer FIRST) ACSL by example, the specification of the max_element function is quite long and a bug in this specification may be hard to detect. The function max_element finds the index of the maximum element in an array of integers. The formal version of this specification is complicated by the fact that it specifies that if there are several maximum elements, the function returns the first one.

Next in the document, a function max_seq for returning the value of the maximum element in an array is defined. The implementation is straightforward (sketched below from this description; the exact signature used in ACSL by example may differ):
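/* Prototype of the previously specified function (assumed). */
int max_element(const int* p, int n);

/* Return the value of the maximum element by reusing max_element
   to find its index. */
int max_seq(const int* p, int n) {
  return p[max_element(p, n)];
}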

The verification of max_seq builds on the specification for max_element. This provides additional confidence: the fact that max_seq was verified successfully makes a bug in the specification of max_element unlikely. Not only that, but if the (simpler, easier to trust) specification for max_seq were the intended high-level property to verify, it wouldn't matter that the low-level specification for max_element was not exactly what the specifier intended (say, if there was an error in the specification of the index returned in case of ties): the complete system still has no choice but to behave as intended in the high-level specification. Unlike a compiler that lets you put together functions with incompatible expectations, the proof system always ensures that the contract used at the call point is the same as the contract that the called function was proved to satisfy.

And this concludes what I have to say on the subject of software verification. The first two parts were rather specific to C, and would only apply to embedded code in medical devices. This last part is more generic — in fact, it is easier to adapt the idea of functional specifications for static verification to high-level languages such as C# or Java than to C. Microsoft is pushing for the adoption of its own proposal in this respect, Code Contracts. Tools are provided for the static verification of these contracts in the premium variants of Visual Studio 2010. And this is a good time to link to this video. Functional specifications are a high-level and versatile tool, and can help with the informational aspects of medical software as well as for the embedded side of things. I would like to thank again my host Robert Nadler, my colleague Virgile Prevosto and department supervisor Fabrice Derepas for their remarks, and twitter user rgrig for the video link.

The Challenges of Developing Software for Medical Devices

Monday, February 8th, 2010

Developing Software for Medical Devices – Interview with SterlingTech gives a good overview of the challenges that especially face young medical device companies. In particular (my emphasis):

Make sure that your company has a good solid Quality System as it applies to software development. Do not put a Quality System into place that you can not follow. This is the cause of most audit problems.

I couldn't have said it better myself, though I think that focusing on the FDA may distract you from why you're creating software quality processes in the first place. The real purpose of having software design controls is to produce a high-quality, user-friendly, robust, and reliable system that meets the intended use of the device. If your quality system does that, you won't have to worry about FDA audits.

Since Klocwork is a static analysis tool company, I also want to point out a recent related article that's worth reading -- and trying to fully understand:

A Few Billion Lines of Code Later: Using Static Analysis to Find Bugs in the Real World

Note the user comment by Bjarne Stroustrup.

UPDATE (2/9/10): Here's another good code analysis article:

A Formal Methods-based verification approach to medical device software analysis

2009 Ultimate Developer and Power Users Tool List for Windows

Wednesday, September 2nd, 2009

This is a great list. I linked to it in 2007 but somehow forgot to in 2008. Anyway, there's probably at least one tool here that you haven't seen before and that would be worth trying out.

Scott Hanselman's 2009 Ultimate Developer and Power Users Tool List for Windows

Oh, and read through the comments. Everyone has their own favorites, and opinions.

Top 10 Concepts That Every Software Engineer Should Know

Tuesday, July 29th, 2008

Check out Top 10 Concepts That Every Software Engineer Should Know. The key point here is concepts. These are (arguably) part of the foundation that all good software engineers should have:

  1. Interfaces
  2. Conventions and Templates
  3. Layering
  4. Algorithmic Complexity
  5. Hashing
  6. Caching
  7. Concurrency
  8. Cloud Computing
  9. Security
  10. Relational Databases

From a practical point of view, this still comes down to a Selecting Books About Programming issue. This list is just more focused on specific software technologies and techniques.

So many books, so little time...

UPDATE (7/30/08):

Here's a career-related post with some good advice: Becoming a Better Developer. Learn a New Technology Each Month (#5) seems like a bit much. I guess it depends on what your definition of "learn" is.

One Reason Why Linux Isn’t Mainstream: ./configure and make

Sunday, June 22nd, 2008

Bear with me; I'll get to the point of the title by the end of the post.

I primarily develop for Microsoft platform targets, so I have a lot of familiarity with Microsoft development tools and operating systems. I also work with Linux-based systems, but mostly on the systems administration side: maintaining servers for R&D tools like Trac and Subversion.

I recently had some interest in trying to use Mono running on Linux as a .NET development platform.

This also allowed me to try Microsoft Virtual PC 2007 (SP1) on XP-SP3. I went to a local .NET Developer's Group (here) meeting a couple of weeks ago on Virtual PC technology. Being a Microsoft group, most of the discussion was on running Microsoft OSs, but I saw the potential for using VPC running Linux for cross-platform development. My PC is an older Pentium D dual core without virtualization support, but it has 3GB of RAM and plenty of hard disk space, so I thought I'd give it a try.

Download and installation of Ubuntu 8.04 (Hardy Heron) LTS Desktop on VPC-2007 is a little quirky, but there are many blog posts that detail what it takes to create a stable system: e.g. Installing Ubuntu 8.04 under Microsoft Virtual PC 2007. Other system optimizations and fixes are easily found, particularly on the Ubuntu Forums.

OK, so now I have a fresh Linux installation and my goal is to install a Mono development environment. I started off by following the instructions in the Ubuntu section from the Mono Other Downloads page. The base Ubuntu Mono installation does not include development tools. From some reading I found that I also had to install the compilers:

# apt-get install mono-mcs
# apt-get install mono-gmcs

So now I move on to MonoDevelop. Its download page lists the MonoDevelop source package along with four dependency packages, each built from source.


Here's my first gripe: Why do I have to download and install four other dependencies (not including the Mono compiler dependency that's not even mentioned here)?

Second gripe: All of the packages require unpacking, going to a shell prompt, changing to the unpack directory, and running the dreaded:

./configure
make

Also notice the line: "Compiling the following order will yield the most favorable response." What does that mean?

So I download Mono.Addins 0.3, unpack it, and run ./configure. Here's what I get:

configure: error: No al tool found. You need to install either the mono or .Net SDK.

This is as far as I've gotten. I'm sure there's a solution for this. I know I either forgot to install something or made a stupid error somewhere along the way. Until I spend the time searching through forums and blogs to figure it out, I'm dead in the water.

I'm not trying to single out the Mono project here. If you've ever tried to install a Unix application or library from source, you've inevitably ended up in dependency hell -- having to install a series of other packages that in turn require some other dependency to be installed.

So, to the point of this post: There's a lot of talk about why Linux, which is free, isn't more widely adopted on the desktop. Ubuntu is a great product -- the UI is intuitive, system administration isn't any worse than Windows, and all the productivity tools you need are available.

In my opinion, one reason is ./configure and make. If the open source community wants more developers creating innovative software applications that will attract a wider user base, these have to go. I'm sure that the experience I've described here has turned away many developers.

Microsoft has their problems, but they have the distinct advantage of being able to provide a complete set of proprietary software along with excellent development tools (Visual Studio with ReSharper is hard to beat). Install them, and you're creating and deploying applications instantly.

The first step to improving Linux adoption has to be making it easier for developers to simply get started.

2007 Ultimate Developer And Power Users Tool List For Windows

Tuesday, August 28th, 2007

If you do any Windows development work and haven't seen it yet, this list is a must have:

Scott Hanselman's 2007 Ultimate Developer And Power Users Tool List For Windows