Archive for the ‘.NET’ Category

Microsoft BUILD Conference

Friday, September 16th, 2011

Wow -- talk about drinking from a fire hose. BUILD Conference news and opinions are everywhere.

Fun stuff. It's going to take a while to digest all of this.

Binary Waveform Data in SQL Server 2008

Tuesday, March 8th, 2011

As Shahid points out in Consider MySQL ‘Archive’ storage engine to store large amounts of med device structured or waveform data, saving physiologic waveform data from a medical device in a MySQL database for archive purposes is a reasonable alternative to using flat files.

In SQL Server 2008 you can have it both ways. In addition to saving binary data directly in the database, you have the option of storing a varbinary column as a file stream. From the article How to store and fetch binary data into a file stream column:

File stream data can be used from the .NET Framework using the traditional SqlParameter, but there is also a specialized class called SqlFileStream which can be used with .NET Framework 3.5 SP1 or later. This class provides mechanisms, for example, for seeking a specific position from the data.
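As a rough sketch of what reading a stored waveform with SqlFileStream might look like (the table and column names -- Waveforms, WaveformData, WaveformId -- are hypothetical, and error handling is omitted):

```csharp
using System;
using System.Data.SqlClient;
using System.Data.SqlTypes;
using System.IO;

static class WaveformReader
{
    // Read 'length' bytes of a FILESTREAM blob starting at byteOffset.
    public static byte[] ReadChunk(string connectionString, int waveformId,
                                   long byteOffset, int length)
    {
        using (SqlConnection connection = new SqlConnection(connectionString))
        {
            connection.Open();
            using (SqlTransaction transaction = connection.BeginTransaction())
            {
                // FILESTREAM access requires the file path plus an active
                // transaction context.
                SqlCommand command = new SqlCommand(
                    "SELECT WaveformData.PathName(), " +
                    "GET_FILESTREAM_TRANSACTION_CONTEXT() " +
                    "FROM Waveforms WHERE WaveformId = @id",
                    connection, transaction);
                command.Parameters.AddWithValue("@id", waveformId);

                string path;
                byte[] txContext;
                using (SqlDataReader reader = command.ExecuteReader())
                {
                    reader.Read();
                    path = reader.GetString(0);
                    txContext = (byte[])reader[1];
                }

                byte[] buffer = new byte[length];
                using (SqlFileStream stream =
                       new SqlFileStream(path, txContext, FileAccess.Read))
                {
                    // The stream is seekable -- jump straight to the
                    // samples of interest instead of reading the whole blob.
                    stream.Seek(byteOffset, SeekOrigin.Begin);
                    stream.Read(buffer, 0, buffer.Length);
                }

                transaction.Commit();
                return buffer;
            }
        }
    }
}
```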

There are pros and cons to this approach. The backup and transactional issues, along with the performance considerations, all have to be evaluated against your specific system requirements.  Having the SQL Server engine manage the database relationship to the binary files seems like a big advantage over maintaining flat files yourself.

Read the MSDN article FILESTREAM Storage in SQL Server 2008 for all the gory details.

UPDATE (3/25/11): Who's Got Access to your FileStream Directories?

A .NET Application that Never Dies

Saturday, September 12th, 2009

Jeremy's Graceful Shutdown Braindump should really include another use case. How do you create a .NET application that never shuts down? Ever!

This is a common scenario for closed systems that only allow the user to interact with a predefined set of applications. In other words, the user is never able to use any of the operating system's functionality directly. In particular, they cannot install new applications or update any software components.

This situation is related to the issues discussed in Medical Device Software on Shared Computers. Creating a closed Windows-based system is not an easy task. For our XP Embedded system here are some of the considerations:

  1. Prevent booting from a peripheral device (CD-ROM, USB stick, etc.)
  2. Prevent access to the BIOS so that #1 is enforced.
  3. Prevent plug-n-play devices from auto-starting installers.
  4. Do not run Explorer as the start-up shell -- no desktop or start menu.
  5. Prevent Ctrl-Alt-Del from activating task manager options.
  6. Disable the Alt-Tab selection window so the user cannot switch application focus.
  7. Ensure that the primary user interface application is always running.
  8. All UI components must exit without user interaction when the system is powered down.

One of the challenges for .NET applications is how to handle unexpected exceptions. What you need first is a way to catch all exceptions. OK, so now you know your program is in serious distress. You may be able to recover some work (a la a "graceful shutdown"), but after that it's not a good idea to keep the application running.
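In a WinForms application, catching "all" exceptions means hooking both the UI-thread and non-UI-thread exception events. Here's a minimal sketch of that wiring (my own code, not from the original post; MainForm and HandleFatalException are placeholder names):

```csharp
using System;
using System.Threading;
using System.Windows.Forms;

static class Program
{
    [STAThread]
    static void Main()
    {
        // Route UI-thread exceptions to our handler instead of the default dialog.
        Application.SetUnhandledExceptionMode(UnhandledExceptionMode.CatchException);
        Application.ThreadException += OnUiThreadException;

        // Exceptions from worker threads land here. The process will still
        // terminate, but we get a chance to log and trigger a restart.
        AppDomain.CurrentDomain.UnhandledException += OnDomainException;

        Application.Run(new MainForm());   // MainForm is hypothetical
    }

    static void OnUiThreadException(object sender, ThreadExceptionEventArgs e)
    {
        HandleFatalException(e.Exception);
    }

    static void OnDomainException(object sender, UnhandledExceptionEventArgs e)
    {
        HandleFatalException(e.ExceptionObject as Exception);
    }

    static void HandleFatalException(Exception ex)
    {
        // Log, attempt any "graceful" recovery, then restart (see below).
    }
}
```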

That means you have to restart the program. For a WinForm application one option is:
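The snippet that originally appeared here wasn't preserved, but presumably it called Application.Restart() from the handler, something like this (LogFatal is a hypothetical logging/cleanup helper):

```csharp
static void HandleFatalException(Exception ex)
{
    LogFatal(ex);            // hypothetical logging/cleanup helper
    Application.Restart();   // calls Application.Exit(), then relaunches
}
```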

Application.Restart() essentially calls Application.Exit(), which tries to gracefully shut down all UI threads. The problem with that is the application may appear to be hung if you have background worker threads that are monitoring hardware devices that are not currently responding.

Another issue is when the .NET application is doing interop with COM components. I've seen situations where all of the managed threads appear to exit properly via Application.Exit(), but an unmanaged exception (and error window) still occurs. This behavior is unacceptable.

The way to ensure that the application restarts properly (simplified):
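That code block wasn't preserved either; based on the discussion below, the simplified version is presumably something like:

```csharp
static void HandleFatalException(Exception ex)
{
    LogFatal(ex);   // hypothetical logging/cleanup helper

    // Launch a fresh instance of ourselves first...
    System.Diagnostics.Process.Start(Application.ExecutablePath);

    // ...then tear this process down unconditionally. Unlike Application.Exit(),
    // Environment.Exit() does not wait for UI or worker threads to cooperate.
    Environment.Exit(1);
}
```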

The Environment.Exit() call is harsh, but it is the only way I know of that guarantees that the application really exits. If you want a Windows Application event log entry and a dump of your application, you can use Environment.FailFast() instead.

UPDATE (9/19/09): I ran across a post about COM object memory allocation in mixed managed/unmanaged environments: Getting IUnknown from __ComObject. As this article exemplifies, debugging COM objects under these circumstances is a real pain in the butt. We used strongly-typed managed wrappers for our COM objects. Besides a .NET memory profiler, we just monitored overall allocations externally with Process Explorer. It may be undocumented and fragile, but at least it's good to know that there is a way to dig deeper if you need to.

Exploring Cloud Computing Development

Saturday, February 7th, 2009

It's not easy getting your arms around this one. The term Cloud Computing has become a catch-all for a number of related technologies that have been used in enterprise-class systems for many years (e.g. grid computing, SOA, virtualization, etc.).

One of the primary concerns about cloud computing in Healthcare IT is privacy and security. A majority of the content and comments in just about every article or blog post about CC, health-data related or not, deals with these concerns. I'm going to save that discussion for a future post.

I'm also not going to dig into the multitude of business and technical trade-offs of these "cloud" options versus more traditional SaaS and other hybrid server approaches. People write books about this stuff, and there's a flood of Internet content that slices and dices these subjects to death.

My purpose here is to provide an overview of cloud computing from a developer's point of view so we can begin to understand what it would take to implement custom software in the cloud. All of the major technical aspects are well covered elsewhere and I'm not going to repeat them here. I'm just going to note the things that I think are important to take into consideration when looking at each option.

Here's a simplified definition of Cloud Computing that's easy to understand and will get us started:

Cloud computing is using the internet to access someone else's software running on someone else's hardware in someone else's data center while paying only for what you use.

As a consumer, let's say of a social networking site or PHR, this definition fits pretty well. There's even an EMR that is implemented in the cloud, Practice Fusion, that would fit this definition.

As a developer, though, I want it to be my software running in the cloud so I can make use of someone else's infrastructure in a cost-effective manner. There are currently three major CC options. Cloud Options - Amazon, Google, & Microsoft gives a good overview of these.

The Amazon and Google diagrams below were derived from here.

Amazon Web Services

[Diagram: Amazon Cloud Services]

The Amazon development model involves building Xen virtual machine images that are run in the cloud by EC2. That means you build your own Linux/Unix or Windows operating system image and upload it to be run in EC2. AWS has many pre-configured images that you can start with and customize to your needs. There are web service APIs (via WSDL) for the additional support services like S3, SimpleDB, and SQS. Because you are building self-contained OS images, you are responsible for your own development and deployment tools.

AWS is the most mature of the CC options. Applications that require the processing of huge amounts of data can make effective use of the AWS on-demand EC2 instances, which can be managed by Hadoop.

If you have previous virtual machine experience (e.g. with Microsoft Virtual PC 2007 or VirtualBox), one of the main differences in working with EC2 images is that they do not provide persistent storage. The EC2 instances have anywhere from 160 GB to 1.7 TB of attached storage, but it disappears as soon as the instance is shut down. If you want to save data you have to use S3, SimpleDB, or your own remote storage server.

It seems to me that having to manage OS images along with applications development could be burdensome.  On the other hand, having complete control over your operating environment gives you maximum flexibility.

A good example of using AWS is here: How We Built a Web Hosting Infrastructure on EC2.

Google AppEngine

[Diagram: Google App Engine]

GAE allows you to run Python/Django web applications in the cloud. Google provides a set of development tools for this purpose, i.e. you can develop your application within the GAE run-time environment on your local system and deploy it after it's been debugged and works the way you want it.

Google provides entity-based SQL-like (GQL) back-end data storage on their scalable infrastructure (BigTable) that will support very large data sets. Integration with Google Accounts allows for simplified user authentication.

From the GAE web site:  "This is a preview release of Google App Engine. For now, applications are restricted to the free quota limits."

Microsoft Windows Azure

[Diagram: Microsoft Windows Azure]

Azure is essentially a Windows OS running in the cloud.  You are effectively uploading and running  your ASP.NET (IIS7) or .NET (3.5) application.  Microsoft provides tight integration of Azure development directly into Visual Studio 2008.

For enterprise Microsoft developers the .NET Services and SQL Data Services (SDS) will make Azure a very attractive option.  The Live Framework provides a resource model that includes access to the Microsoft Live Mesh services.

Bottom line for Azure: If you're already a .NET programmer, Microsoft is creating a very comfortable path for you to migrate to their cloud.

Azure is now in CTP and is expected to be released later this year.

UPDATE (4/27/09) Here's a good Azure article:  Patterns For High Availability, Scalability, And Computing Power With Windows Azure.

Getting Started

All three companies make it pretty easy to get software up and running in the cloud. The documentation is generally good, and each has a quick start tutorial to get you going. I tried out the Google App Engine tutorial and had Bob in the Clouds on their server in about 30 minutes.

Bob's Guest Book

Stop by and sign my cloud guest book!

Misc. Notes:

  • All three systems have Web portal tools for managing and monitoring uploaded applications.
  • The Dr. Dobbs article Computing in the Clouds has a more detailed look at AWS and GAE development.

Which is Best for You?

One of the first things that struck me about these options is how different they all are. Because of this, I think that from a developer's point of view you'll quickly have a gut feeling about which one best matches your current skill sets and project requirements. The development components are just one piece of the selection-process puzzle though. Which one you actually end up using (it could very well be none) will also be based on all your other technical and business needs.

UPDATE (6/23/09): Here's a good high-level cloud computing discussion: Reflections on Executive Briefing Event: Cloud & RIA. I like the phrase "Cloud Computing is Elastic" because it captures most of the appealing aspects of the technology. It's no wonder Amazon latched on to that one -- EC2 is, after all, the Elastic Compute Cloud.

Visualizing Spaghetti Code

Tuesday, August 19th, 2008

Is a picture of a computer program really worth a thousand words? HP seems to think so.

Making Sense of Spaghetti Code discusses the visual representation of source code as a marketing tool for their consulting services. Being from California, I find the budget-cutting complaint:

because there aren’t enough programmers who know the Cobol language used in the state’s payroll software

pretty scary. The urban (programming) myth that there are still more lines of Cobol in use than any other language may actually be true. Scarier still!

HP's Legacy Application Transformation and Visual Intelligence Tools were reported in April: SOA picture worth 1,000 words for HP.

NDepend also has visual representations of code structure and dependencies that make pretty pictures of .NET code.

Loading Individual Designer Default Values into Visual Studio .NET Settings

Wednesday, July 23rd, 2008

The VS.NET Settings designer creates an ApplicationSettingsBase Settings class in Settings.Designer.cs (and optionally Settings.cs).  The default values from the designer are saved in app.config and are loaded into the Settings.Default singleton at runtime.

So, now you have a button on a properties page that says 'Reset to Factory Defaults', and you want to reload the designer default values back into your properties. If you want to do this for all property values you can just use Settings.Default.Reset(). But what if you only want to reset a subset of your properties?

There may be a better way to do this, but I couldn't find one.  The following code does the job and will hopefully save someone from having to reinvent this wheel.

The ResetToFactoryDefaults method takes a collection of SettingsProperty objects and uses the DefaultValue string to reset each value. Most value types (string, int, bool, etc.) work with the TypeConverter, but the StringCollection class is not supported, so its XML string has to be deserialized manually.
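The original code download isn't reproduced here; a minimal reconstruction based on that description might look like this:

```csharp
using System.Collections.Specialized;
using System.ComponentModel;
using System.Configuration;
using System.IO;
using System.Xml.Serialization;

public static class FactoryDefaults
{
    public static void ResetToFactoryDefaults(SettingsPropertyCollection properties)
    {
        foreach (SettingsProperty property in properties)
        {
            string defaultValue = property.DefaultValue as string;
            if (defaultValue == null)
                continue;

            if (property.PropertyType == typeof(StringCollection))
            {
                // The TypeConverter doesn't support StringCollection, so
                // deserialize the designer's <ArrayOfString> XML manually.
                XmlSerializer serializer = new XmlSerializer(typeof(string[]));
                string[] items = (string[])serializer.Deserialize(new StringReader(defaultValue));
                StringCollection collection = new StringCollection();
                collection.AddRange(items);
                Properties.Settings.Default[property.Name] = collection;
            }
            else
            {
                TypeConverter converter = TypeDescriptor.GetConverter(property.PropertyType);
                Properties.Settings.Default[property.Name] =
                    converter.ConvertFromInvariantString(defaultValue);
            }
        }
        Properties.Settings.Default.Save();
    }
}
```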

These helper methods show how just selected (and all) properties can be reset.
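Sketching those as well (the property names here are made up for illustration):

```csharp
// Reset only a chosen subset of properties.
public static void ResetSelectedToFactoryDefaults()
{
    SettingsPropertyCollection selected = new SettingsPropertyCollection();
    selected.Add(Properties.Settings.Default.Properties["MainFormSize"]);   // hypothetical
    selected.Add(Properties.Settings.Default.Properties["RecentFiles"]);    // hypothetical
    FactoryDefaults.ResetToFactoryDefaults(selected);
}

// Reset everything -- equivalent in effect to Settings.Default.Reset().
public static void ResetAllToFactoryDefaults()
{
    FactoryDefaults.ResetToFactoryDefaults(Properties.Settings.Default.Properties);
}
```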

This code was developed with VS 2005, but should also work in VS 2008.

.NET Console.WriteLine Performance Issues

Wednesday, May 28th, 2008

We ran into an interesting problem recently that I have not been able to find documented anywhere.

We're doing real-time USB data acquisition with .NET 2.0. The data bandwidth and processing load aren't overwhelming. Specifically, we expect data packets at 50 Hz -- every 20 ms. Yet we were having horrible delay problems. In the end we found that Console.WriteLine was the culprit!

To verify this, a test program was written to measure the throughput of the following loop:
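(Reconstructed for illustration -- the original snippet wasn't preserved; mstr is the test message string.)

```csharp
using System;
using System.Diagnostics;

class WriteLineTimer
{
    static void Main()
    {
        string mstr = new string('x', 100);        // message length is varied per run
        Stopwatch timer = Stopwatch.StartNew();
        int count = 0;
        while (timer.ElapsedMilliseconds < 30000)  // run for about 30 seconds
        {
            Console.WriteLine(mstr);
            count++;
        }
        Console.Error.WriteLine("{0} chars: {1:F2} ms/message",
            mstr.Length, (double)timer.ElapsedMilliseconds / count);
    }
}
```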

The length of mstr is varied and this loop is run for about 30 seconds. The results show the ms per message for increasing message lengths:

[Chart: Console.WriteLine performance -- ms per message vs. message length]

Console.WriteLine is surprisingly slow!

We use log4net for our logging. With the timestamp and class information, a log message is typically greater than 100 characters. A single log message introduces at least a 20 ms delay in that case, with additional messages adding that much more. Even though debug logging would not be included in the released version, these significant delays make development difficult.

Not only do you need to make sure that there are no Console.WriteLine calls in your real-time threads, you also need to remove the console appender (<appender-ref ref="ConsoleAppender"/>) from the log4net configuration. The log4net file appenders do not cause noticeable delays.

ALT.NET for the Rest of Us

Sunday, May 4th, 2008

If you follow Microsoft comings and goings, one of the more interesting developments (at least to me) over the last 8 months has been the formation of a community that calls itself ALT.NET.

As explained in What is ALT .NET?, the term was coined by David Laribee last year and describes a group of

like-minded individuals within the larger world of the Microsoft® .NET Framework who felt a growing frustration that Microsoft tooling, guidance, and .NET culture at large did not reflect or support an important set of core values.

The name is misleading because even though most members are from the .NET community, the group's purpose is to promote a set of core values that are platform/language independent. To summarize from Jeremy's article:

  1. Keep an eye out for a better way.
  2. Adopt the best of any community.
  3. Don't be content with the status quo -- keep experimenting with techniques.
  4. Remember that it's the principles and knowledge that really matter.

The members of the ALT.NET group are distinguished technologists and many are productive bloggers, e.g. codebetter.com and Ayende@Rahien. Also, the discussion group altdotnet is very active (over 6200 posts since the beginning of the year) and lively. There are also periodic group meetings (see the ALT.NET site for links) that use Open Space Technology (OST) to organize conference agendas. Check out the interesting videos (by David Laribee) from the recent conference in Seattle.

So why are ALT.NETters not like the rest of us? We're experienced developers that use modern tools and techniques, but we:

  • Have never used enterprise-class frameworks and tools (e.g. Biztalk, P&P Application Blocks, ESB, TFS, etc.).
  • Have never worked with a "Software Architect". We have always had to design and develop our own systems.
  • Have experimented with Agile development methodologies but have never been part of a "real" Agile team.
  • Think pair programming is an April Fool's joke.
  • As with Agile, we know about all the different "driven" software development approaches, but have never had the opportunity to fully embrace any of them.
  • Have heard about Boo, Spec#, and F#, but have never used them.

This list could go on and on. Many have never used an ORM or the MVC design pattern either. The point isn't what we know versus what they know. I've talked about Stereotyping Programmers before and how it's just plain bad. I think the ALT.NET community has made a conscious effort to improve their inclusiveness.

The ALT.NET group is certainly on the cutting edge of useful and innovative software technologies and techniques. We may not understand everything they're talking about, but the conversation is well worth listening to. Someday you may be faced with a challenge that will need just the type of solutions they've been discussing.

Selecting an MVC/MVP Implementation for a Winforms Project

Friday, February 1st, 2008

I'm not a computer scientist. I'm also not one of the many über programmers that create and analyze software frameworks and techniques. I simply design and develop software that attempts to meet my customer's needs. To that end I'm always looking for the best tools available to get the job done.

Jeremy Miller states the importance of design patterns well:

I know many people blow off design patterns as ivory tower twaddle and silly jargon, but I think they're very important in regards to designing user interface code. Design patterns give us a common vocabulary that we can use in design discussions. A study of patterns opens up the accumulated wisdom of developers who have come before us.

My opinion: You don't need to be a rocket scientist to understand design patterns. Most are just common sense. Complex patterns are designed to solve complex problems. Design patterns should be thought of as a tool that you use just like any other. Don't let the 'ivory tower twaddle' scare you away.

I think most people would agree that one of the key components to creating a successful software product is quality. I've developed .NET applications in the past and have experienced the difficulty of testing and maintaining the functionality of Winform forms and components when they are created with the default Visual Studio tools. If you're not careful, here's what you end up with:

[Photo: a plate of spaghetti]

Photo hat tip: Josh Smith from Using MVC to Unit Test WPF Applications -- a very good and relevant article.

I should note here that the development of software for medical devices already has rigorous verification and validation processes to ensure quality. See FDA Good Manufacturing Practice (GMP - Quality System Regulation) subpart C–Design Control (§ 820.30 sections f & g). However, these requirements do not preclude the need for development techniques that make the software more robust and maintainable. On the contrary, the higher the software quality, the easier it is to meet these standards.

I've recently spent some time trying to select a GUI architecture that will allow us to create a more robust unit testing environment. This is why I started looking at Model-View-Controller (MVC) and Model-View-Presenter (MVP) design patterns. The need for these types of design patterns is twofold:

  1. Separation of Concerns (SoC), which allows for
  2. Improved Testability.

There are many articles and blog posts that describe MVC, MVP, and their numerous variations. These techniques have been around for many years, but the current cornerstone is a set of Martin Fowler articles on GUI architectures.

Once you understand these concepts you can start to grasp the trade-offs of all of the available MVC/MVP flavors. If you're like me, one of the problems you'll run into is that there are so many different approaches (and opinions) that you'll be left wondering which is best to implement. The only advice you'll get is that it's a matter of choice. Great, thanks! From the article above, Josh puts it best:

If you put ten software architects into a room and have them discuss what the Model-View-Controller pattern is, you will end up with twelve different opinions.

This is when you turn from the theory and start looking for concrete implementations that might be suitable for your situation. Microsoft has released an ASP.NET MVC Framework as part of VS2008, but all of the Winform code samples I found were part of either blog posts or articles.

As you look at the different implementations (and relevant snippets), you quickly realize that following these design patterns requires significantly more work than just adding your application's logic directly to the IDE generated delegates. The additional work is expected and is the trade-off for improved testability.

That's fine, and worth it, but it's still time and money. We do not have the resources, or experience, to undertake a full Test-Driven Development (TDD) effort. We will implement MVC/MVP on only the displays that we feel are the most vulnerable.

I'm not going to list all of the candidate examples I looked at. I will mention that Jeremy's series of articles (here) dig deep into these issues and have lots of good code examples. Each approach has their pros and cons, just like the one I'll present here. We'll try to use it, but may end up with something else in the end. As we become more experienced, I suspect we'll evolve into a customized solution that best meets our needs.

A Promising Candidate:

Implementing the Passive View -- a Derivative of the Model-View-Control by Matthew Cochran.

[Diagram: Passive View -- a Derivative of the Model-View-Control]

This hybrid approach appealed to me for a couple of reasons. The first is that I spent several years doing Swing development, which uses an MVC that also allows multiple simultaneous views of a single model. I also like the event-driven approach, which is not only heavily used in Java but is also well supported in .NET. In any case, the View is passive and all of the important functional logic is centralized in the Controller class, which can be easily tested with real or mock Model and View objects.
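To make that concrete, here's a stripped-down passive view sketch of my own (not Matthew's actual classes, which are richer). The view exposes dumb properties and events; the controller holds all the functional logic, so it can be unit tested against a mock IPatientView with no WinForms dependency:

```csharp
using System;

// The view is a thin, passive surface -- a Form would implement this.
public interface IPatientView
{
    string PatientName { get; set; }
    event EventHandler SaveRequested;
}

public class PatientModel
{
    public string Name;
}

// All functional logic lives here, where it's easy to test.
public class PatientController
{
    private readonly IPatientView view;
    private readonly PatientModel model;

    public PatientController(IPatientView view, PatientModel model)
    {
        this.view = view;
        this.model = model;
        view.SaveRequested += OnSaveRequested;
        view.PatientName = model.Name;   // push initial state to the view
    }

    private void OnSaveRequested(object sender, EventArgs e)
    {
        model.Name = view.PatientName;   // the view never touches the model
    }
}
```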

Matthew has done a good job of providing generic support classes that make implementation somewhat less cumbersome. The MvcControlBase class provides generic Control-View wiring while ChangeRequestEvents manages all events in a single class.

The project download provided by the article is a VS2008 solution. We're still using VS2005, but I was able to back-port the project to VS2005 with only minor modifications that had no effect on functionality. The VS2005 project is available for download here:

MVC-PV-2005.zip (23K)

Final Thoughts:

I see adoption of MVC/MVP methodology for GUI development as a critical component for improving software reliability, quality, and long-term maintainability. Also, structuring the application side with MVC/MVP is only half the battle. Developing an effective testing strategy must go along with it in order to achieve these objectives. Until Microsoft provides an integrated Winforms MVC solution like they did for ASP.NET, we'll just have to continue to roll our own.

I'd like to hear about your experiences, suggestions, and recommendations on any of these topics.

Thanks!

UPDATE (6-May-08): Here's a good MVC article by Jeff Atwood: Understanding Model-View-Controller

UPDATE (16-Jun-08): Another reference: Everything You Wanted To Know About MVC and MVP But Were Afraid To Ask

UPDATE (11-Sep-08): Still more: MVC vs. MVP: A Hillbilly's Journey.

Connected Systems and BizTalk

Wednesday, November 14th, 2007

Last night I went to the launch of the San Diego Connected Systems SIG. Brian (along with Chris Romp) gave a great overview of BizTalk Server 2006 R2.

I have never used BizTalk and had little knowledge of its capabilities going in. BizTalk reference material and articles can be found in numerous places on the web -- a good summary is Introducing BizTalk Server 2006 R2 (pdf).

My major take-aways from the presentation were:

  • BizTalk is an enterprise-class product -- i.e. a heavyweight solution designed to scale for very large business needs (global reach, high throughput, tight control and policies, high reliability).
  • As such, the learning curve is steep.
  • BizTalk uses a message oriented architecture designed to connect disparate systems of all types.
  • Some of the key BizTalk tools include:
    • Sending and receiving messages with Adapters
    • Orchestrations
    • Business Rule Engine
    • Message processing with Pipelines
    • Message translation with Data Mapping
    • Business Activity Monitoring (BAM)
  • BizTalk uses a publish/subscribe model that allows for asynchronous message handling.
  • Most development tools are integrated into Visual Studio. Some of the visual message mapping needs present real GUI challenges.
  • BizTalk Server 2006 R2 will include Windows Communication Foundation (WCF) and Windows Workflow Foundation (WF) integration. WF will not replace Orchestrations.
  • Microsoft SQL Server is used as the back-end database and is very tightly bound to BizTalk functionality and performance. The message persistence capability of the Message Box is a powerful built-in tool.
  • The Microsoft Enterprise Service Bus (ESB) Guidance further uses BizTalk to support a loosely coupled messaging architecture.
  • BizTalk will also be a key component of Microsoft's new service-oriented architecture (SOA) framework called Oslo.

Because of its message handling architecture it's easy to see how HL7 translation and routing could be accomplished. Microsoft provides accelerators (pre-defined schema, orchestration samples, etc.) for HL7 and HIPAA for this purpose.

It's not hard to understand the importance of BizTalk in the larger Enterprise space. It appears to be benefiting from its years of prior experience and continued integration with other evolving Microsoft technologies. Overall, I was very impressed with BizTalk.

A Note on Special Interest Groups

I'm not only lucky to have a SIG like this in the area, but it's also great to have people as knowledgeable (and friendly) as Brian and Chris running it. Great job guys!

I would encourage everyone to seek out and attend their local user/developer group meetings. Don't just go for the free pizza (which usually isn't that good anyway) -- it's a great way to improve yourself both technically and professionally. You'll also get to meet new people that have the same interests as you.

I think that getting exposure to technologies that you don't use in your day-to-day work can be just as rewarding as becoming an expert in your own domain. Learning about cutting-edge software (or hardware) is exciting no matter what it is. That new knowledge and perspective also has the potential to lead you down roads that you might not have considered otherwise.