Archive for the ‘Programming’ Category

What makes a good programmer? Acceptance of meaningless rules and conclusions!

Friday, December 12th, 2008

Boing Boing has this great post: Comfort with meaninglessness the key to good programmers.

Research in the UK on teaching computer science found three groups of students:

  1. People who answered the questions using different mental models for different questions.
  2. People who answered using a consistent model.
  3. People who didn't answer the questions at all.

Contrary to predictions that the more adaptive group (#1) would fare better, they found that the consistent group (#2) made the more successful programmers.

Even better:

... single biggest predictor of likely aptitude for programming is a deep comfort with meaninglessness

Now I know why I enjoy programming so much.

Social science research is so much fun! As you might have expected,  xkcd has the perfect strip for this (hat tip: comment #6):

[xkcd strip: "The Difference"]

Stack Overflow Launches: A Programmer’s Resource and Community

Monday, September 15th, 2008

A new programming site, Stack Overflow, officially launched (public beta) today. Started by Jeff Atwood and Joel Spolsky, the site is intended as a forum for getting answers to your programming questions.

From the About:

Stack Overflow is by programmers, for programmers, with the ultimate intent of collectively increasing the sum total of programming knowledge in the world. No matter what programming language you use, or what operating system you call home -- better programming is our goal.

Stack Overflow is that tiny asterisk in the middle, there.

There's also a FAQ and a blog.

I've been taking part in the SO private beta testing for about a month. I've asked one question related to an issue I was working on and tried to answer a few questions. My reputation is probably a good reflection of my minimal activity level.  From that experience though, I can tell you that it takes a significant amount of effort to get your reputation even into the hundreds.

I'm a long-time Code Project user, so my initial impression of SO is greatly influenced by my ongoing CP experience.  Because of that, and for better or worse, comparisons are unavoidable.

Sites like SO provide two basic services:

  1. Getting answers to technical questions.
  2. Providing a gathering place for like-minded individuals.

Getting Answers

Currently there's no comparison: SO is in its infancy. I have yet to get a hit on SO for a search term that returns pages on CP. This is no surprise, as it will take a long time for SO to build a critical mass of technical content.

One concern about the SO question/answer-only format versus CP is that CP has both forums and an extensive article base.  For me anyway, the most useful CP search hits usually come from the code posted in the articles.

Forums might have relevant links, but are less likely to have code snippets that will end up with useful search results. I think SO needs to figure out a way to encourage the posting or downloading of searchable code. There's no substitute for the blood and guts of programming: code!

IMHO, the missing set in the Venn diagram is an article submission component.

Community

There are hundreds of programming forums, link sites, and blogs.  What's unique about the contributions of these groups of individuals is their self-forming community -- they are the very definition of a social network.

A recent NYT op-ed article by David Brooks called The Social Animal discusses Republican Party doctrine development and how it:

... underestimates the importance of connections, relationships, institutions and social filaments that organize personal choices and make individuals what they are.

It struck me that these basic psychological and sociological tenets are exactly the same motivators that create and maintain online social networks. SO is no different.  Social status (reputation) is an important motivator, and how SO manages these interactions is a much discussed topic (e.g. see Fastest Gun in the West Problem).

CP has somehow been able to keep a loyal group of contributors over the years -- Article Competitions, Surveys, The Lounge, Hall Of Shame -- I'm not sure what keeps them coming back, but they do.

SO must continue to build and nurture that sense of community in order to ensure long term involvement of contributors and to attract new members.

Off and Running

SO is off to a fast start because of the popularity that Jeff and Joel have brought to the launch. The site is visually clean, efficient, and easy to use.  It appears that (early on anyway) a majority of the questions are Microsoft technology related.  A broader range of topics is sure to appear now that the site is open to the public.

I'm looking forward to watching Stack Overflow evolve, and come up in my search results.

UPDATE (9/16/08):

Here's a sampling of other launch posts (all open in another window/tab):

Stack Overflow: Not Convinced
Stack Overflow Launches
Stack Overflow: Solutions for Coders
Stack Overflow
Stackoverflow.com
Bad First Impression
StackOverflow - CrackOverflow or StackOverblown?

The Dirty Words of Software Development

Sunday, August 24th, 2008

A post today by Jeremy Miller has a (late) Tribute to George Carlin which lists words and phrases that should not be used when discussing software development. I don't think the list was meant to be comprehensive but it's a good read anyway.  One of them does a good job of backing up my contention that analogies are evil:

“Software as Construction” – ... I feel perfectly qualified to say that the “Software as Construction” analogy is an extremely poor fit.

It's good to see that Mort is also on the list, and the first one too! That one really struck a nerve for me when I first heard it last year, so it should be at the top.

I'm not so sure about the "refactoring" items.  Bad design and refactoring are two different things. Code duplication is bad design and should never be tolerated. But adding unneeded (and worse, untested) complexity for the sake of generality alone isn't a smart use of resources either. I think the term "refactoring" has essentially become synonymous with "rewriting" anyway -- nobody's fooled by that jargon any more.
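The duplication point is easy to make concrete. A minimal before/after sketch (Python, with hypothetical names and numbers) of the kind of refactoring that should never be controversial:

```python
# Before: the same pricing rule duplicated in two places (hypothetical example).
def retail_price(base):
    return round(base * 1.08 * 0.95, 2)  # tax, then discount -- copy one

def web_price(base):
    return round(base * 1.08 * 0.95, 2)  # identical logic, copy-pasted

# After: extract the duplicate so the rule lives in exactly one place.
def final_price(base, tax=1.08, discount=0.95):
    """Apply tax and a discount to a base price."""
    return round(base * tax * discount, 2)
```

When the rule changes, there is now exactly one place to change it, and no chance the two copies drift apart.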

Visualizing Spaghetti Code

Tuesday, August 19th, 2008

Is a picture of a computer program really worth 1,000 words? HP seems to think so.

Making Sense of Spaghetti Code discusses the visual representation of source code as a marketing tool for HP's consulting services.  As a Californian, I find the budget-cutting complaint:

because there aren’t enough programmers who know the Cobol language used in the state’s payroll software

is pretty scary. The urban (programming) myth that there are still more lines of Cobol in use than any other language may actually be true. Scarier still!

HP's Legacy Application Transformation and Visual Intelligence Tools were reported in April: SOA picture worth 1,000 words for HP.

NDepend also provides visual representations of code structure and dependencies that make pretty pictures of .NET code.

Top 10 Concepts That Every Software Engineer Should Know

Tuesday, July 29th, 2008

Check out Top 10 Concepts That Every Software Engineer Should Know. The key point here is concepts. These are (arguably) part of the foundation that all good software engineers should have:

  1. Interfaces
  2. Conventions and Templates
  3. Layering
  4. Algorithmic Complexity
  5. Hashing
  6. Caching
  7. Concurrency
  8. Cloud Computing
  9. Security
  10. Relational Databases
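To give a couple of these concepts some flavor, caching (#6) in its simplest form is memoization, and the dictionary underneath it is hashing (#5) at work. A minimal sketch (Python, not from the original list):

```python
def memoize(fn):
    """Cache results keyed by the call arguments; the dict lookup
    underneath is hashing (#5) doing the work for caching (#6)."""
    cache = {}
    def wrapper(*args):
        if args not in cache:
            cache[args] = fn(*args)
        return cache[args]
    return wrapper

@memoize
def fib(n):
    # Exponential time without the cache, linear with it.
    return n if n < 2 else fib(n - 1) + fib(n - 2)
```

The same trade-off shows up everywhere from CPU caches to web proxies: spend memory to avoid recomputing.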

From a practical point of view, this still comes down to a Selecting Books About Programming issue. This list is just more focused on specific software technologies and techniques.

So many books, so little time...

UPDATE (7/30/08):

Here's a career related post with some good advice: Becoming a Better Developer.  Learn a New Technology Each Month (#5) seems like a little much. I guess it depends on what your definition of "learn" is.

Loading Individual Designer Default Values into Visual Studio .NET Settings

Wednesday, July 23rd, 2008

The VS.NET Settings designer creates a Settings class derived from ApplicationSettingsBase in Settings.Designer.cs (and optionally Settings.cs).  The default values from the designer are saved in app.config and are loaded into the Settings.Default singleton at runtime.

So now you have a button on a properties page that says 'Reset to Factory Defaults', and you want to reload the designer default values back into your properties.  If you want to do this for all property values you can just use Settings.Default.Reset(). But what if you only want to reset a subset of your properties?

There may be a better way to do this, but I couldn't find one.  The following code does the job and will hopefully save someone from having to reinvent this wheel.

The ResetToFactoryDefaults method takes a collection of SettingsProperty objects and uses the DefaultValue string to reset each value. Most value types (string, int, bool, etc.) work with the TypeConverter, but the StringCollection class is not supported, so its XML string has to be deserialized manually.
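The C# snippet itself doesn't appear to have survived in this copy of the post, so here is a rough, language-agnostic sketch of the same idea in Python (all names hypothetical): walk the requested properties, convert each designer default (stored as a string) back to the property's real type, and special-case collections.

```python
# Hypothetical stand-in for the lost C# code: reset selected settings to their
# string-valued designer defaults, converting each back to its real type.
CONVERTERS = {int: int, bool: lambda s: s == "True", str: str}

def reset_to_factory_defaults(settings, defaults, types, names):
    """Reset only the named properties; pass every name to reset them all."""
    for name in names:
        default, prop_type = defaults[name], types[name]
        if prop_type is list:
            # Collection defaults need special handling, analogous to the C#
            # StringCollection case where an XML default is deserialized by hand.
            settings[name] = default.split(";")
        else:
            settings[name] = CONVERTERS[prop_type](default)
    return settings
```

Resetting only ["Timeout"], for instance, restores that property from its default string while leaving all other live values untouched.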

These helper methods show how just selected (and all) properties can be reset.

This code was developed with VS 2005, but should also work in VS 2008.

Interoperability: Google Protocol Buffers vs. XML

Monday, July 14th, 2008

Google recently open sourced Protocol Buffers: Google's Data Interchange Format (documentation, code download). What are Protocol Buffers?

Protocol buffers are a flexible, efficient, automated mechanism for serializing structured data – think XML, but smaller, faster, and simpler.
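The "smaller" claim is easy to see with a toy comparison. This Python sketch uses a fixed binary layout as a stand-in for protobuf's wire format (real protobuf uses tagged varints, so the numbers are illustrative only):

```python
import struct

# The same two-field record as self-describing XML text versus a compact
# fixed binary encoding (a stand-in for real protobuf wire encoding).
user_id, age = 12345, 42

xml = "<user><id>%d</id><age>%d</age></user>" % (user_id, age)
binary = struct.pack("<Ii", user_id, age)  # 4-byte uint + 4-byte int = 8 bytes

# The tags are what buy XML its flexibility and readability; dropping them
# in favor of a shared schema is where the size (and speed) win comes from.
```

The XML string is roughly five times the size of the binary record here, which is the trade-off the rest of this post is about.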

The documentation is complete and worth a quick read through. A complete analysis of PB vs. XML can be found here:  So You Say You Want to Kill XML.....

As discussed, one of the biggest drawbacks for us .NET developers is that there is no support for the  .NET platform. That aside, all of the issues examined are at the crux of why interoperability is so difficult. Here are some key points from the Neward post:

  1. The advantage to the XML approach, of course, is that it provides a degree of flexibility; the advantage of the Protocol Buffer approach is that the code to produce and consume the elements can be much simpler, and therefore, faster.
  2. The Protocol Buffer scheme assumes working with a stream-based (which usually means file-based) storage style for when Protocol Buffers are used as a storage mechanism. ... This gets us into the long and involved discussion around object databases.
  3. Anything that relies on a shared definition file that is used for code-generation purposes, what I often call The Myth of the One True Schema. Assuming a developer creates a working .proto/.idl/.wsdl definition, and two companies agree on it, what happens when one side wants to evolve or change that definition? Who gets to decide the evolutionary progress of that file?

Anyone who has considered using a "standard" has had to grapple with these types of issues. All standards gain their generality by trading something off (speed, size, etc.). This is why most developers choose to build proprietary systems that meet their specific internal needs. For internal purposes, there's generally no need to compromise. PB is a good example of this.

This also seems to be true in the medical device industry.  Within our product architectures we build components to best meet our customer requirements without regard for the outside world. Interfacing with others (interoperability) is generally a completely separate task, if not a product unto itself.

Interoperability is about creating standards, which means having to compromise and make trade-offs.  It would be nice if healthcare interoperability could be just a technical discussion like the PB vs. XML debate. This would allow better integration of standards directly into products, so that there would be less of the current split-personality (internal vs. external needs) development mentality.

Another thing I noticed about the PB announcement -- how quickly it was held up to XML as a competing standard. With Google's clout, simply giving it away creates a de facto standard. Within the medical connectivity world though, there is no Google.

I've talked about this before, but I'm going to say it again anyway. From my medical device perspective, with so many confusing standards and competing implementations the decision on what to use ends up not being based on technical issues at all. It's all about picking the right N partners for your market of interest, which translates into N (or more) interface implementations. This isn't just wasteful, it's also wrong. Unfortunately, I don't see a solution to this situation coming in the near future.

One Reason Why Linux Isn’t Mainstream: ./configure and make

Sunday, June 22nd, 2008

Bear with me, I'll get to the point of the title by the end of the post.

I primarily develop for Microsoft platform targets, so I have a lot of familiarity with Microsoft development tools and operating systems. I also work with Linux-based systems, but mostly on the systems administration side: maintaining servers for R&D tools like Trac and Subversion.

I recently had some interest in trying to use Mono running on Linux as a .NET development platform.

This also gave me a chance to try Microsoft Virtual PC 2007 (SP1) on XP-SP3. I went to a local .NET Developer's Group (here) meeting a couple of weeks ago on Virtual PC technology. Since it's a Microsoft group, most of the discussion was about running Microsoft OSes, but I saw the potential for using VPC running Linux for cross-platform development. My PC is an older Pentium D dual core without virtualization support, but it has 3 GB of RAM and plenty of hard disk space, so I thought I'd give it a try.

Download and installation of Ubuntu 8.04 (Hardy Heron) LTS Desktop on VPC-2007 is a little quirky, but there are many blog posts that detail what it takes to create a stable system: e.g. Installing Ubuntu 8.04 under Microsoft Virtual PC 2007. Other system optimizations and fixes are easily found, particularly on the Ubuntu Forums.

OK, so now I have a fresh Linux installation and my goal is to install a Mono development environment. I started off by following the instructions in the Ubuntu section from the Mono Other Downloads page. The base Ubuntu Mono installation does not include development tools. From some reading I found that I also had to install the compilers:

# apt-get install mono-mcs
# apt-get install mono-gmcs

So now I move on to MonoDevelop. Here's what the download page looks like:

[screenshot: MonoDevelop download page]

Here's my first gripe: Why do I have to download and install four other dependencies (not including the Mono compiler dependency that's not even mentioned here)?

Second gripe: All of the packages require unpacking, going to a shell prompt, changing to the unpack directory, and running the dreaded:

./configure
make

Also notice the line: "Compiling the following order will yield the most favorable response." What does that mean?

So I download Mono.Addins 0.3, unpack it, and run ./configure. Here's what I get:

configure: error: No al tool found. You need to install either the mono or .Net SDK.

This is as far as I've gotten. I'm sure there's a solution for this. I know I either forgot to install something or made a stupid error somewhere along the way. Until I spend the time searching through forums and blogs to figure it out, I'm dead in the water.

I'm not trying to single out the Mono project here. If you've ever tried to install a Unix application or library, you've inevitably ended up in dependency hell -- having to install a series of other packages that in turn require some other dependency to be installed.

So, to the point of this post: There's a lot of talk about why Linux, which is free, isn't more widely adopted on the desktop. Ubuntu is a great product -- the UI is intuitive, system administration isn't any worse than Windows, and all the productivity tools you need are available.

In my opinion, one reason is ./configure and make. If the open source community wants more developers for creating innovative software applications that will attract a wider user base, these have to go. I'm sure that the experience I've described here has turned away many developers.

Microsoft has their problems, but they have the distinct advantage of being able to provide a complete set of proprietary software along with excellent development tools (Visual Studio with ReSharper is hard to beat). Install them, and you're creating and deploying applications instantly.

The first step to improving Linux adoption has to be making it easier for developers to simply get started.

.NET Console.WriteLine Performance Issues

Wednesday, May 28th, 2008

We ran into an interesting problem recently that I have not been able to find documented anywhere.

We're doing real-time USB data acquisition with .NET 2.0. The data bandwidth and processing isn't overwhelming. Specifically, we expect data packets at 50 Hz -- every 20 ms. Yet we were having horrible delay problems. In the end we found that Console.WriteLine was the culprit!

To verify this, a test program was written to measure the throughput of the following loop:
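The C# loop itself hasn't survived in this copy of the post; as a stand-in, here is a rough Python sketch of an equivalent measurement (names hypothetical): write a fixed-length message repeatedly and report the average cost per write.

```python
import sys
import time

def ms_per_message(stream, length, iterations=1000):
    """Write a fixed-length message repeatedly; return average ms per write."""
    msg = "x" * length + "\n"
    start = time.perf_counter()
    for _ in range(iterations):
        stream.write(msg)
    stream.flush()
    return (time.perf_counter() - start) * 1000.0 / iterations

# e.g. ms_per_message(sys.stdout, 100) -- an actual console window is
# dramatically slower per write than a file or an in-memory buffer,
# which is the effect measured below.
```

Comparing the console against a file or in-memory stream makes the per-message cost of console output obvious.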

The length of mstr is varied and this loop is run for about 30 seconds. The results show the ms per message for increasing message lengths:

[chart: Console.WriteLine performance, ms per message vs. message length]

Console.WriteLine is surprisingly slow!

We use log4net for our logging. With the timestamp and class information, a log message is typically greater than 100 characters. A single log message introduces at least a 20 ms delay in that case, with additional messages adding that much more. Even though debug logging would not be included in the released version, these significant delays make development difficult.

Not only do you need to make sure that there are no Console.WriteLine calls in your real-time threads, you also need to remove the console appender (<appender-ref ref="ConsoleAppender"/>) from the log4net configuration. The log4net file appenders do not cause noticeable delays.
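For reference, a minimal log4net configuration with only a file appender might look like this (the appender name, file name, and pattern are illustrative, not from the original post):

```xml
<log4net>
  <!-- File appender only: no ConsoleAppender, so no console-write delays. -->
  <appender name="FileAppender" type="log4net.Appender.FileAppender">
    <file value="app.log" />
    <layout type="log4net.Layout.PatternLayout">
      <conversionPattern value="%date [%thread] %-5level %logger - %message%newline" />
    </layout>
  </appender>
  <root>
    <level value="DEBUG" />
    <appender-ref ref="FileAppender" />
  </root>
</log4net>
```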

Selecting Books About Programming

Monday, May 26th, 2008

This is tough to do. There are tons of technical books out there. Also, now that the Internet can instantly answer just about any question, the path of least resistance leads to arguments like this: Why I don’t read books.

There is no right or wrong when it comes to learning methods. It's a personal preference. I'm a book reader, but I can understand how Internet content (blogs, articles, etc.) has made it easy for people who don't like books (for whatever reason) to acquire relevant knowledge.

"Best" lists are abundant. For example:

OK, so once you get past the classics (Code Complete 2, The Pragmatic Programmer, Design Patterns, etc.) where do you go from there?

I will typically invest in books on emerging technologies that I want to fully understand (a recent one was Windows Presentation Foundation Unleashed -- highly recommended). The real challenge is finding a book that doesn't suck. Reviews and recommendations by other readers, like from the sites above, are the best resources.

My other vetting technique is standing (and reading) for long periods of time in the nerd-book section of a bookstore. There's no substitute for browsing the pages of a real book. The one that's still in your hand when your SO drags you towards the exit is probably the best one to buy.