Embedding Angular Components into a Legacy Web App

July 25th, 2018

In a perfect world, you'd be able to create a greenfield Angular SPA from scratch. In the real world, that's usually not the case. That legacy web application has way too much baggage to realistically convert to an SPA in a single shot. This is particularly true if you're currently using server-side rendering technology such as JSP or Rails.

The only real solution is to incrementally move/upgrade pieces of UI logic and data access patterns (i.e. converting to REST interfaces). If you are planning a move to Angular*, a good starting point is to first embed small pieces of Angular-implemented logic into your existing application. This approach also allows the new Angular components to share CSS styles for seamless visual integration.

NgInterop is a simple TypeScript class that allows a legacy web application to have two-way communications (via pub/sub) with embedded Angular components. The underlying MessagingService class is an implementation of the Message Bus pattern in Angular 2 and TypeScript.
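
A minimal sketch of such a message bus, assuming RxJS (the real implementation is in the linked repository):

    import { Injectable } from '@angular/core';
    import { Observable, Subject } from 'rxjs';
    import { filter, map } from 'rxjs/operators';

    interface Message { channel: string; data: any; }

    @Injectable({ providedIn: 'root' })
    export class MessagingService {
      private message$ = new Subject<Message>();

      // Publish an event instance; its class name becomes the channel.
      publish<T>(message: T): void {
        this.message$.next({ channel: (<any>message).constructor.name, data: message });
      }

      // Observable of all events of the given class.
      of<T>(messageType: new (...args: any[]) => T): Observable<T> {
        return this.message$.pipe(
          filter(m => m.channel === (<any>messageType).name),
          map(m => m.data));
      }
    }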

Source code for the demo project is here: embedded-angular

Highlights:

  • Line 6: Side note on the new Angular 6 providedIn syntax. This saves you from having to add every service to the app.module.ts @NgModule providers list. Very handy!
  • Line 19: This saves the native JavaScript initialization callback function (see index.html below). This example only has one callback function, but it would be easy to extend this to support multiple initialization callbacks.
  • Line 20: Adds the NgInterop instance to the window object so that external JavaScript can simply call methods on window.ngInterop (again, see index.html below).
  • Lines 32 and 38: Wrap the MessagingService subscribe/publish calls in NgZone.run() so that code invoked from external JavaScript executes in the Angular zone. A sketch of the class follows.
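
For reference, here is a minimal sketch of the kind of class the highlights describe. The line numbers above refer to the full source in the repository; the callback name below is illustrative:

    import { Injectable, NgZone } from '@angular/core';
    import { MessagingService } from './messaging.service';
    import { AngularEvent, HtmlEvent, LogEvent } from './events';

    @Injectable({ providedIn: 'root' })  // the new Angular 6 syntax
    export class NgInterop {
      public static readonly ANGULAR_EVENT = 'AngularEvent';
      public static readonly HTML_EVENT = 'HtmlEvent';
      public static readonly LOG_EVENT = 'LogEvent';

      private typeClassMap: { [name: string]: any } = {
        [NgInterop.ANGULAR_EVENT]: AngularEvent,
        [NgInterop.HTML_EVENT]: HtmlEvent,
        [NgInterop.LOG_EVENT]: LogEvent
      };

      constructor(private ngZone: NgZone, private messagingService: MessagingService) {
        // Save the native initialization callback, then expose this instance globally.
        const initCallback = (<any>window).ngInteropInit;
        (<any>window).ngInterop = this;
        if (initCallback) { initCallback(); }
      }

      subscribe(type: string, callback: (event: any) => void): void {
        // Run in the Angular zone so change detection picks up the events.
        this.ngZone.run(() =>
          this.messagingService.of(this.typeClassMap[type]).subscribe(callback));
      }

      publish(type: string, data: string): void {
        this.ngZone.run(() =>
          this.messagingService.publish(new this.typeClassMap[type](data)));
      }
    }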

Other notes:

  • The typeClassMap object maps a BaseEvent class name (string) to the actual class. The public static *_EVENT names provide safer access to the NgInterop functions from the Angular code.
  • There's no way to do type or parameter checking on the native JavaScript side, but it is still good practice to strongly type the BaseEvent-derived classes (sketched below). This provides good documentation and catches problems early on the TypeScript side.
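
For example, the event classes can be as simple as this sketch:

    // Strongly typed events; the native JavaScript side refers to them only by name.
    export abstract class BaseEvent {
      constructor(public readonly data: string) {}
    }
    export class AngularEvent extends BaseEvent {}
    export class HtmlEvent extends BaseEvent {}
    export class LogEvent extends BaseEvent {}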

Here is the stripped-down index.html that shows how the external JavaScript code interacts with NgInterop.
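
The full file is in the repository; its script boils down to something like the following sketch (plain JavaScript in a script tag, written here with TypeScript casts; the function names are illustrative):

    // Registered before Angular starts; invoked by the NgInterop constructor.
    (<any>window).ngInteropInit = function () {
      (<any>window).ngInterop.subscribe('AngularEvent', (event: any) => {
        console.log('AngularEvent received: ' + event.data);
      });
    };

    // Wired to an HTML button's onclick; publishes an HtmlEvent to Angular.
    function onHtmlButtonClick(): void {
      (<any>window).ngInterop.publish('HtmlEvent', 'HTML button clicked');
    }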

Highlights:

  • Line 4: After subscribeToEvents() is called by the NgInterop constructor, this function subscribes to AngularEvent messages. AngularEvent messages are published when the Angular 'Toggle Remove Button' is clicked in the AppComponent class.
  • Line 10: On an HTML click event, an HtmlEvent message is published. The subscriber to HtmlEvent messages is also in the AppComponent class.
  • Line 13: The callback function is added to the window object. This runs before Angular is started.
  • All logging is done by publishing LogEvent messages. These are displayed by the LogComponent class.

The example app has two Angular components that interact with the native JavaScript, as well as with each other, via NgInterop. The rest of the code should be self-explanatory.

Screenshot of the example app:


This project uses the following:

  • Angular CLI -- Of course.
  • RxJS -- Used by the MessagingService.
  • Bootstrap 4 -- For the pretty buttons and "card" layout.
  • Moment.js -- To more easily format the log timestamp.
  • Protractor -- For running the Angular e2e tests.

Enjoy!
*There are probably similar integration approaches for React and Vue. I just don't know what they are.

UPDATE (7/27/18):

Here's a React approach: Creating & Managing components outside React.

The Problem With Google And Why You Should Care

March 3rd, 2018

When I read The Case Against Google in The New York Times last week, it was with a typical technology-interest eye. It was like reading the local paper about hit-and-runs, robberies, or the latest political scandal. Somewhat interesting, but it really doesn't affect me (thankfully). Or so I thought.

Then Medgadget published Our Case Against Google, which is a comprehensive (and damning) indictment of Google and the "GoogleFacebook duopoly". Their bottom line:

Google is an evil monopoly.

This is not a new red flag. Even nine years ago there were concerns: Is Google a Monopoly? Just ask Stack Overflow (and me). Note that this site's Google search traffic in 2009 was 95.9%. Now it's 98.4%, mostly because there are fewer search engine competitors around today.

Here's an overly simplistic summary of the effects of these monopolistic behaviors:

  1. It kills innovation. As the Raffs' journey shows, superior technology can be easily crushed.
  2. It kills high-quality content, which is well-documented in the Medgadget article.

Companies trying to innovate, or content providers that depend on ad revenue for survival, are of course directly affected by this. But I'm neither of those, so how does this affect me?

I'm an Android/Gmail/Google Docs & Maps person (i.e. no Apple here). I take it for granted that all of these wonderful Google-supplied technologies and conveniences are free. Google funds these goodies through its anti-competitive tactics and biased search algorithms. Does this mean that I'm benefiting from Google's bad behavior? No duh!

So the logical conclusion is that my Google freebies aren't free after all.

Technology innovation and high-quality content are also things that I take for granted. But in reality, these are being sacrificed and are the actual cost. The struggles (and potential failure) of companies like Foundem and Medgadget are a very high price to pay, and it's happening all the time as a result of Google's behavior.

Why you should care: Monopolistic behavior carries this high price for all of us. This is true no matter what technology you use.

Modern-day technology anti-trust litigation (including the 1998 Microsoft case) involves complex legal/business/technology issues that are well worth becoming educated about.

Unfortunately, battling 800-pound gorillas is a difficult business. Raising awareness wherever possible, as this small med-tech community asks, is the least we can do.

Thanks for reading!

Update (3/22/18): Google and Facebook can’t help publishers because they’re built to defeat publishers


React JS Dynamic DOM Generation

September 13th, 2017

I had implemented an Angular 4 dynamic DOM prototype using the Angular Dynamic Component Loader and wanted to do the same thing with React JS. After doing some research I found that it was not very obvious how to accomplish this.

By the time I was done, there ended up being two pieces of functionality worth sharing:

  1. Dynamic component creation using JSX.
  2. JSON-driven dynamic DOM generation.

Source code for the demo project is here: reactjs-dynamic-dom-generation

Try the demo app live! (with CodeSandbox)

The project was created using create-react-app. The only other package added was axios for making the AJAX call to retrieve the JSON content.
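
Loading that JSON amounts to a few lines along these lines (the path is illustrative):

    import axios from 'axios';

    // Fetch the UI JSON that drives the DOM generation.
    async function loadUiJson(): Promise<any> {
      const response = await axios.get('/example-ui.json');
      return response.data;  // the parsed UI definition
    }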

Dynamic Component Creation

With JSX, dynamic content generation turned out to be pretty simple. The core piece of code is in DynamicComponent.js:
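
A TypeScript-flavored sketch of the idea (the repository uses plain JavaScript, and the service and prop names here are assumptions):

    import * as React from 'react';
    import { componentService } from './ComponentService';

    interface DynamicProps { context: any; path: string; }

    export class DynamicComponent extends React.Component<DynamicProps> {
      // Look up the React Component registered for this UI type.
      private component: React.ComponentType<any> =
        componentService.getComponent(this.props.context.ui);

      render() {
        // JSX accepts any component-typed expression as an element type.
        return <this.component context={this.props.context} path={this.props.path} />;
      }
    }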

In the demo application, all available components register themselves via the ComponentService, which is just a singleton that maintains a simple hash map. For example:
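
A sketch of that singleton (names are illustrative):

    import * as React from 'react';
    import { DefaultComponent } from './DefaultComponent';

    class ComponentService {
      private components: { [ui: string]: React.ComponentType<any> } = {};

      registerComponent(ui: string, component: React.ComponentType<any>): void {
        this.components[ui] = component;
      }

      getComponent(ui: string): React.ComponentType<any> {
        // Unregistered UI types fall back to the DefaultComponent.
        return this.components[ui] || DefaultComponent;
      }
    }

    export const componentService = new ComponentService();

    // Each component registers itself, e.g.:
    // componentService.registerComponent('switch', SwitchComponent);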

As the sketch above shows, the desired React Component is first fetched from the ComponentService and then passed to JSX via <this.component ... />.

The JSX preprocessor converts this embedded HTML into JavaScript with the 'React Element' type set to the passed component, along with the additional attributes. That is, if the UI type was 'switch', the equivalent hard-coded HTML would have been <SwitchComponent ... />, which is a perfectly acceptable JSX template.

Voilà, we have created a dynamic DOM element!

Note that Vue.js applications using JSX can use the same technique, except that they pass a Vue Component instead.

JSON-Driven Dynamic DOM Generation

In order to demonstrate dynamic DOM generation, I defined a simple UI JSON structure. The demo uses Bootstrap panels for the group and table elements and implements only a few components.

The UI JSON is loaded from the server when the application is started and drives the DOM generation. A DynamicComponent is passed a context (i.e. its associated JSON object) along with a path (see below). Each UI element has the following attributes:

  • name: A unique name within the current control context. It is used to form the namespace-like path that allows this component to be globally identified.
  • ui: The type of UI element (e.g. "output", "switch", etc.). This is mapped by the ComponentService to its corresponding React Component. If the UI type is not registered, the DefaultComponent is used.
  • label: Label used on the UI.
  • controls: (optional) For container components ("group", "table"), this is an array of contained controls.
  • value: (optional) For value-based components.
  • range: (optional) Specifies the min/max/step for the range component.

This structure can easily be extended to meet custom needs.

There are a number of implementation details that I'm not covering in this post. I think the demo application is simple enough that just examining and playing with the code should answer any questions. If not, please ask.

The example UI JSON file is here: example-ui.json.
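
A representative fragment, written here as a TypeScript literal (illustrative only, not the actual file contents):

    const exampleUi = {
      name: 'demo',
      ui: 'group',
      label: 'Demo Group',
      controls: [
        { name: 'power', ui: 'switch', label: 'Power', value: true },
        { name: 'level', ui: 'range', label: 'Level', value: 5,
          range: { min: 0, max: 10, step: 1 } },
        { name: 'status', ui: 'output', label: 'Status', value: 'OK' }
      ]
    };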

The resulting output, including console logging from the switch and range components, looks like this.


This is, of course, a very minimal implementation that was designed to just demonstrate dynamic DOM generation. In particular, there is no UI event handling, data binding, or other interactive functionality that would be required to make a useful application.

Enjoy!

The Real Impediment to Interoperability

February 7th, 2017

Medical device interoperability is one of my favorite subjects. With the meteoric rise of  IoT, there's more and more discussion like this: Why we badly need standardization to advance IoT.

The question for me has always been: why is standardized communication so hard to achieve? Healthcare providers, payors, EMR vendors, etc. each have their own incentives and priorities with respect to interoperability. The following is based on my experiences as a medical device developer, a world that has many similarities to IoT. As such, these observations are probably not applicable to many other parts of the healthcare domain.

The Standard API

Let's use a simple home appliance scenario to illustrate why interoperability is so important. Say you have a mobile application that wants to be able to control your dishwasher. It may want to start/stop operation, show wash status, or notify you when a wash is complete. Apps without and with interoperability are shown here:

Notes:

  • Without a standard API: The application has to include custom code for each dishwasher vendor. This is a significant burden for the App developer and limits adoption by a wider customer base.
  • With a standard API: New dishwasher models that implement the "dishWasher API" will just work without having to change the application (ideally, anyway). At the very least, integration of a new model is much easier. A sketch of such an API follows these notes.
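
For illustration only, such a "dishWasher API" might look something like this (entirely hypothetical; no such standard exists):

    // Hypothetical standard interface implemented by every conforming dishwasher.
    interface DishWasherApi {
      start(cycle: 'normal' | 'heavy' | 'rinse'): void;
      stop(): void;
      getStatus(): { state: 'idle' | 'washing' | 'complete'; minutesRemaining: number };
      onWashComplete(callback: () => void): void;
    }

    // The App codes against the interface, never against a specific vendor.
    function watchWasher(washer: DishWasherApi): void {
      washer.start('normal');
      washer.onWashComplete(() => console.log('Wash complete!'));
    }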

Having a standard API that every App (and as importantly, other devices) can use to interoperate is critical for IoT (appliances and medical devices) growth. Besides all of the obvious benefits, in the healthcare industry the stakes are even higher (from Center for Medical Interoperability -- Need for Change):

It will improve the safety and quality of care, enable innovation, remove risk and cost from the system and increase patient engagement.

The other important thing to note is that the API communication shown above requires full semantic interoperability. This is the most rigorous type of interoperability because the App must understand the full meaning of the data in order to function properly. For example, knowing whether a temperature is in ºF or ºC has significant consequences.

Let me also point out that even though semantic interoperability is not easy, the barriers to achieving it are generally not technical. There may be points of contention on protocols, units of measure, API signatures, functional requirements, and so on, but when you're working within a specific discipline these can usually be worked out. Non-healthcare industries (telecom, banking, etc.) have proven it can be done.

Cost of Standards

There are a number of hurdles to adopting standards (e.g. HL7, FHIR, etc.). The costs of implementing and maintaining compliance with a standard are non-trivial:

  • The additional development and testing overhead required. On the development side, these interfaces are often not ideal for internal communication and can have a performance impact.
  • Some standards have a certification process (e.g. the Continua Certification Process) that requires rigorous testing and documentation.
  • If you have a data element that the standard does not currently cover, you may be faced with the standard's approval process, which can take a significant amount of time. For example, see the FHIR Change Request Tracking System, which currently has thousands of entries. Again, this is not a technical issue. Having to deal with bureaucracy is just part of the overhead of conforming to a standard.

Company Motivations

Now let's try to understand what's important to a company that's trying to develop and market a product:

  1. Product differentiation. Provide vendor-unique features (a "niche") that are not available from competitors.
  2. Time to market. Being there first is critical for brand recognition and attracting customers.
  3. One-stop shop (multi-product companies). "If you use our product family your experience will be seamless!"

The last item is particularly important. Following the appliance theme:

This strategy is of course how Apple became the largest company in the world. In most industries, the big companies have the largest market share. This "walled garden" approach is the most natural way to lock out both large and small competitors.

The First Hint of Problems

Notice that the cost of interoperability can affect all three of the market goals a company is trying to achieve. Standards:

  1. Are a "me too" feature.
  2. Take time to implement.
  3. Punch holes in the desirable closed platform.

The actual impact depends on a lot of factors, but it can be significant.

The Real Impediment

But the real elephant in the room is Return on Investment (ROI):

The ROI on interoperability is inherently very low and often negative (Gain < Cost). This is because:

  1. As noted above, conforming to an external standard has a significant cost associated with it.
  2. Lack of demand. Interoperability is not something a customer is willing to pay extra for (zero Gain).

I think companies really do care about patient safety, quality of care, and healthcare cost reduction. This is what motivates their business and drives innovation. The reality is that ROI is also a factor in every product decision.

Side note: If conforming to a standard were mandated as a regulatory requirement, then ROI becomes moot and the expense would just be part of the cost of doing business.

I'm sure that interoperability is on every company's feature backlog, but it's not likely to become an actionable priority over all of the other, higher-ROI functionality. Those other features also contribute to improving healthcare, but the bottom line is hard to ignore.

Contributing resources and putting a logo on a standards organization's sponsor website is not the same thing as actually implementing and maintaining real interoperability in a product.

Apologies for the cynicism. It's just frustrating that nothing has really changed after all these years. Interoperability: Arrested Progress is close to four years old, and the same old, same old (insanity) still prevails.

I think the reasons outlined here are a plausible explanation of why this is so. We're all still waiting for that game-changer.

Canine Mind Reading

January 24th, 2017

That's right! It is wallace-shawn-inconceivable that the Indiegogo No More Woof campaign raised over $22,000 from 231 contributors. The project has been around since late 2013, but this is the first time I've run across it (via the recent IEEE article below). I just couldn't resist posting the picture.

It goes without saying that the Scandinavian-based company NSID (currently "hibernating") failed to deliver on its promise. This is well chronicled by IEEE: The Cautionary Tale of "No More Woof," a Crowdfunded Gadget to Read Your Dog's Thoughts.

The article even mentions Melon, a human EEG headband Kickstarter that I was involved with. I feel fortunate that I actually received a working device.

BCI (brain-computer interface) technology is very difficult even under the best of circumstances with humans. I think the correct thought sequence for working with any EEG-based device is:

  1. "I'm excited"
  2. "I'm curious who that is?"
  3. "I'm tired"

Woof!

Publishing an Angular 2 Component NPM Package

December 9th, 2016

It was suggested on the Angular 2 Password Strength Bar post that I publish the component as an NPM package. This sounded like a good way to share, and it was something I had never done before. So, here it is.

You should go to the GitHub repository and inspect the code directly. I'm just going to note some non-obvious details here.

Application Notes:

  • Added in-line CSS with the @Component styles metadata property.
  • In addition to passwordToCheck, added a client-configurable barLabel parameter.

Project Notes:

  • src: This is where the PasswordStrengthBar component (passwordStrengthBar.component.ts) lives. The CSS and HTML are embedded directly in the @Component metadata. Also note that tsconfig.json compiles the TypeScript to ../lib, which is what is distributed to NPM (app and src are excluded in .npmignore).
  • The index.d.ts and index.js in the root directory reference ./lib to allow importing the component without having to specify a TypeScript file. See the How TypeScript resolves modules section in TypeScript Module Resolution. I.e., after the npm installation is complete you just need this:
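
Something like this, where the package and export names are assumptions (check the repository's package.json):

    // Resolves to index.d.ts / index.js in the package root, which re-export from ./lib.
    import { PasswordStrengthBar } from 'angular2-password-strength-bar';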

Development Notes:

Overall (and briefly), I find the TypeScript/JavaScript tooling very frustrating. I'm not alone; see, e.g., The Controversial State of JavaScript Tooling. The JSON configuration files (npm's package.json, TypeScript, Karma, Webpack, etc.) are complex and the documentation is awful.

The worst part (IMO) is how fragile everything is. The tools and libraries change rapidly, with no apparent consideration for backward compatibility or external dependencies. Updating versions invariably breaks the build. Online fixes often take you down a rabbit hole of unrelated issues. If you're lucky, the solution is just to continue using an older version. Use npm-check-updates at your own risk!

Feedback:

If you have questions or problems, find a bug, or have suggestions for improvement, please open an issue. Even better, fork the project, make the desired changes, and submit a pull request.

Enjoy!

Angular 2 Password Strength Bar

September 28th, 2016

I spent a little time converting the AngularJS Directive to test the strength of a password into a pure Angular 2 component and thought I'd share.

A working demo and all of the code can be found here: Angular 2 Password Strength Bar.

Notes:

  • Upgraded to TypeScript and used the OnChanges interface.
  • Incorporation of the bar is now component-based:

<password-strength-bar [passwordToCheck]="account.password"></password-strength-bar>

  • Removed direct DOM modification and replaced it with Angular 2 dynamic in-line styles.
  • Removed the jQuery dependency. (An abbreviated sketch of the component follows.)
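
For reference, an abbreviated sketch of the component; the real strength-scoring logic lives in the linked demo:

    import { Component, Input, OnChanges } from '@angular/core';

    @Component({
      selector: 'password-strength-bar',
      // Bar segments are colored via dynamic in-line styles (no direct DOM access).
      template: `
        <ul id="strengthBar">
          <li class="point" *ngFor="let color of barColors"
              [style.background-color]="color"></li>
        </ul>`,
      styles: ['.point { display: inline-block; width: 18%; height: 5px; margin-right: 2%; }']
    })
    export class PasswordStrengthBar implements OnChanges {
      @Input() passwordToCheck: string;
      barColors: string[] = [];

      ngOnChanges(): void {
        this.barColors = this.getColors(this.measureStrength(this.passwordToCheck));
      }

      // Placeholder heuristic; the real version scores length and character variety.
      private measureStrength(password: string): number {
        return password ? Math.min(password.length, 40) : 0;
      }

      private getColors(strength: number): string[] {
        const colors = ['#F00', '#F90', '#FF0', '#9F0', '#0F0'];
        return colors.slice(0, Math.floor(strength / 10) + 1);
      }
    }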

Enjoy!

Old Nerds

April 27th, 2016

Nobody is immune from aging.

In the tech industry, this can be a problem, as described in Is Ageism In Tech An Under-The-Radar Diversity Issue?. The programmer age distribution from the Stack Overflow Developer Survey 2016 Results clearly shows this:

[Chart: developer age distribution, from the Stack Overflow Developer Survey 2016 Results]

Worth noting:

  • 77.2% are younger than 35.
  • Twice as many are under 20 as are over 50.

Getting old may suck, but if problem-solving and building solutions are your passion, being an old nerd (yes, I'm way over 35) really can look like this:
[Chart: nerd age vs. satisfaction]
There's a lot of reasonable advice in Being A Developer After 40, but I think this sums it up best:

As long as your heart tells you to keep on coding and building new things, you will be young, forever.

I sure hope so! 🙂

UPDATE 13-Oct-16: Too Old for IT

Melon Headband Android SDK

March 21st, 2016

It appears that the Melon Headband Alpha Android SDK is no longer available from Melon. See Melon Headband — Android Beta.

Below is a copy of the SDK that I received in April 2015. I successfully built and ran the AndroidMelonBasicSample application on my Motorola phone. It actually communicated with the Melon headband!

Melon was purchased by DAQRI in February 2015. DAQRI still maintains a Melon product page, but the Google+ Melon Headband - Android Users community (see update below) has been all but silent for over six months. That, plus the website message "We're back in the lab crafting new things", is a good indication that Melon development is no longer active.

Download: AndroidMelonSDKSample.zip.

Update (4/6/16): The community has shut down:

[Screenshot: final post in the Melon Headband Google+ community]

Deep Learning

December 6th, 2015

I recently attended a Deep Learning (DL) meetup hosted by Nervana Systems. Deep learning is essentially a technique that allows machines to interpret sensory data. DL attempts to classify unstructured data (e.g. images or speech) by mimicking the way the brain does, using artificial neural networks (ANNs).

A more formal definition of deep learning is:

DL is a branch of machine learning based on a set of algorithms that attempt to model high-level abstractions in data by using multiple processing layers with complex structures.

I like the description from Watson Adds Deep Learning to Its Repertoire:

Deep learning involves training a computer to recognize often complex and abstract patterns by feeding large amounts of data through successive networks of artificial neurons, and refining the way those networks respond to the input.

This article also presents some of the DL challenges and the importance of its integration with other AI technologies.

From a programming perspective, constructing, training, and testing DL systems starts with assembling ANN layers.

For example, categorization of images is typically done with Convolutional Neural Networks (CNNs; see Introduction to Convolution Neural Networks). The general approach is shown here:

Construction of a similar network using the neon framework looks something like this:

Properly training an ANN involves processing very large quantities of data. Because of this, most frameworks (see below) utilize GPU hardware acceleration; most use the NVIDIA CUDA Toolkit.

Each application of DL (e.g. image classification, speech recognition, video parsing, big data, etc.) has its own idiosyncrasies that are the subject of extensive research at many universities. And of course, large companies are leveraging machine intelligence for commercial purposes (Siri, Cortana, self-driving cars).

Popular DL/ANN frameworks include:

Many good DL resources are available at: Deep Learning.

Here's a good introduction: Deep Learning: An MIT Press book in preparation