Exploring Clojure (and FP vs OOP)

I've always viewed functional programming (FP) from afar, mostly because object-oriented programming (OOP) is the dominant development methodology I've been using (Java, Ruby, C#, etc.) for many years.  A majority of the articles I've read on FP have statements like this:

If you’ve never tried functional programming development, I assure you that this is one of the best time investments you can make. You will not only learn a new programming language, but also a completely new way of thinking. A completely different paradigm.

Switching from OOP to Functional Programming gives an overview of the differences between FP and OOP. It uses Scala and Haskell for the FP example code, but I think it still does a good job of getting the major concepts across:

I do not think that FP, or any single paradigm/language/framework/etc. for that matter, is a silver bullet. On the contrary, I'm a true believer in the "right tool for the job" philosophy.  This is particularly true in the software industry where there is such a wide variety of problems that need to be solved.

This view of programming paradigms is cute but actually misleading:

As a developer, it's important to always be learning new problem-solving approaches. I.e. adding new tools to your tool-belt. This will not only allow you to select the best solution(s) for the job, but you'll also be better able to recognize the trade-offs and potential problem areas that might arise with any solution. I think understanding FP concepts will make you a better programmer, but not necessarily because you are using FP techniques and tools.

Functional Programming Is Not a Silver Bullet sums it up best:

Don’t be tricked into thinking that functional programming, or any other popular paradigm coming before or after it, will take care of thinking about good code design instead of us.

The purpose of this article is to present my own experiences in trying to use Clojure/FP as an alternative approach to traditional OOP. I do not believe there is anything here that has not already been covered by many others, but I hope another perspective will be helpful.

Lisp

I chose a Lisp dialect for this exploration for several reasons:

  1. I have some Lisp experience from previous projects and was always impressed with its simplicity and elegance. I really wanted to dig deeper into its capabilities, particularly macros (code as data - programs that write programs). See Lisp, Smalltalk, and the Power of Symmetry for a good discussion of both topics.
  2. I'm also a long-time Emacs user (mostly for org-mode) so I'm already comfortable with Lisp.
  3. This profound advice:

For all you non-Lisp programmers out there, try LISP at least once before you die. You will not regret it.

Lisp has a long and storied history (it's the second-oldest high-level programming language, behind Fortran). Here's a good Lisp historical read: How Lisp Became God's Own Programming Language.

I investigated a variety of Lisp dialects (Racket, Common Lisp, etc.) but decided on Clojure, primarily because it has both JVM and JavaScript (ClojureScript) support. This would allow me more opportunity to use it for real-world projects. This is also why I did not consider alternative FP languages like Haskell and Erlang.

Lastly, the obligatory XKCD cartoon that makes fun of Lisp parentheses (which I address below):

Why Clojure?

Why Clojure? I’ll tell you why… provides a good summary and references:

  1. Simplicity
  2. Java Interoperability
  3. REPL (Read Eval Print Loop)
  4. Macros
  5. Concurrency
  6. ClojureScript
  7. Community

Here are some more reasons from Clojure - the perfect language to expand your brain?

  1. Pragmatism (videos at Clojure TV)
  2. Data processing: Sequences and laziness
  3. Built-in support for concurrency and parallelism

There are also many good articles that describe the benefits of FP (immutable objects, pure functions, etc.).

Clojure (and FP) enthusiasts claim that their productivity is increased because of these attributes. I've seen this stated elsewhere, but from the article above:

Clojure completely changed my perspective on programming. I found myself as a more productive, faster and more motivated developer than I was before.

It also has high praise from high places:

Who doesn't want to hang out with the smart kids?

Clojure development notes

The following are some Clojure development observations based on both the JSON Processor (see below) and a few other small ClojureScript projects I've worked on.

The REPL

The read-eval-print loop, and more specifically the networked REPL (nREPL) and its integration with Emacs clojure-mode/CIDER, is a real game-changer. Java has nothing like it (well, except for the Java 9 JShell, which nobody knows about) and even Ruby's IRB ("interactive ruby") is no match.

Being able to evaluate expressions directly, without having to compile and run a debugger, is a significant development time-saver. It's a particularly effective tool when you're writing tests.
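
For example, you can evaluate any expression in place and see the result immediately (a trivial illustration):

    user=> (map inc [1 2 3])
    (2 3 4)
    user=> (reduce + (range 10))
    45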

Parentheses

A lot of people (like XKCD above) make fun of Lisp parentheses. I think there are two considerations here:

  1. Keeping parentheses matched while editing. In Clojure, this also includes {} and []. Using a good editor is key - see Top 5 IDEs and text editors for Clojure. For me, Emacs smartparens in strict mode (i.e. don't allow mismatches at all), plus some wrap/unwrap and slurp/barf keyboard bindings, all but solved this issue.
  2. When reading Lisp code, I think parentheses get a bad rap. IMO, the confusion has more to do with the difference in basic Lisp flow-control syntax than with stacked parentheses. Here's a comparison of Ruby and Lisp if syntax:
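
A minimal sketch (hypothetical variable x; the Ruby version is shown in comments):

    ;; Ruby:
    ;;   if x > 0
    ;;     "positive"
    ;;   else
    ;;     "non-positive"
    ;;   end
    ;;
    ;; Clojure: the same decision, written as a single expression:
    (if (> x 0)
      "positive"
      "non-positive")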

Once you get used to the differences, code is code. More to the point, bad code is bad code no matter what language it's in. This is important. Here's a good read on the subject: Effective Mental Models for Code and Systems ("the best code is like a good piece of writing").

Project Management

Leiningen ("automating Clojure projects without setting your hair on fire") is essentially Clojure's equivalent of Java's Maven, Ruby's Rake, or Python's pip. It provides dependency management, plug-in support, application/test runners, customized tasks, etc.

Coming from a Maven background, lein configuration management and usage made perfect sense. The best thing I can say is that it always got the job done, and even more important, it never got in the way!
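
For reference, a project.clj is itself just Clojure data. A minimal sketch (coordinates and versions are illustrative, not the project's actual file):

    (defproject json-processor "0.1.0-SNAPSHOT"
      :description "JSON include processor"
      :dependencies [[org.clojure/clojure "1.10.0"]
                     [org.clojure/data.json "0.2.6"]]
      :main json-processor.core)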

Getting Answers

I found the documentation (ClojureDocs) to be very good for two reasons:

  1. Every function page has multiple examples, and some are quite extensive. You typically only need to see one or two good examples to understand how to use a function for your purposes; you rarely need to read the actual function description.
  2. Related functions. The "SEE ALSO" section provides links to functions that can usually improve your code: if → if-not, if-let, when, ... This is very helpful when you're learning a new language.
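
For example, here are two of if's relatives in action (a sketch; valid?, log-access, find-user, and friends are hypothetical):

    ;; when: an if with no else branch and an implicit body
    (when (valid? request)
      (log-access request)
      (handle request))

    ;; if-let: bind and test a value in one step
    (if-let [user (find-user id)]
      (greet user)
      (show-login-page))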

I lurked around some of the community sites (below). The threads I read were respectful and members seemed eager to help.

Clojure Report Card

Language: A

On the whole, I was very pleased with the development experience. Solving problems with Clojure really didn't seem that much different from other languages. The extensive core language capabilities, along with the robust ecosystem of libraries (The Clojure Toolbox), make Clojure a pleasure to use.

I see a lot of potential for practical uses of Clojure technologies. For example, Clojurified Electron plus reagent-forms allowed me to build a cross-platform Electron desktop form application in just a couple of days.

I was only able to explore the tip of the Clojure iceberg. Based on this initial experience, I'm really looking forward to utilizing more of the language capabilities in the future.

FP vs OOP: B

What I was able to experience in this brief exploration did not live up to my expectations for FP. The lower grade reflects the fact that the Clojure projects I've worked on were not big enough to really take advantage of the FP benefits described above.

This is a cautionary tale for selecting any technology to solve a problem. Even though you might choose a language/framework that advertises particular benefits (FP in this case), it doesn't necessarily mean that you'll be able to take advantage of those benefits.

This also highlights the silver bullet vs good design mindset mentioned earlier. To be honest, I somehow thought that Clojure/FP would magically solve problems for me. Of course, I was wrong!

I'm sure this grade will improve for future projects!

Macros: INC (incomplete)

I was also not able to exercise macros as fully as I wanted to, again because of the nature of the projects. I normally do DSL work with Ruby, but next time I'll be sure to try Clojure instead.

TL;DR

The rest of this article digs a little deeper into the differences between the Ruby and Clojure implementations of the JSON Processor project described below.

At the end of the day, the project includes two implementations of close to identical functionality that can be used for comparison. Both have:

    1. Simple command line parsing and validation
    2. File input/output
    3. Content caching (memoization; a sketch follows this list)
    4. JSON parser and printer
    5. Recursive object (hash/map) traversal
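
On the caching point, Clojure's built-in memoize does most of the work. A minimal sketch (hypothetical helper, not the project's exact code):

    (require '[clojure.data.json :as json])

    ;; Wrap the file reader in memoize so repeated includes of the
    ;; same file are read and parsed only once.
    (def get-json-content
      (memoize
        (fn [path]
          (json/read-str (slurp path)))))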

The Ruby version (~57 lines) is about half the size of the Clojure version (~110 lines). This is a small example, but it does point out that Ruby is a simpler language and that there is some overhead to the Clojure/FP programming style (see Pure Functions, below).

JSON Processor

The best way to learn a new language is to try to do something useful with it. I had a relatively simple Ruby script for processing JSON files. Reproducing its functionality in Clojure was my way of experiencing the Clojure/FP approach.

The project is here: json-processor

The Ruby version is in the ./ruby directory, while the Clojure version is in ./src/json_processor. See the README.md file for command line usage.

The processor is designed to simply detect a JSON key that begins with "include" and replace that key/value pair with the top-level object of the referenced file (./<current_path>/value.json). Having this include capability allows reuse of JSON objects and can improve management of large JSON files.

So, if two files exist:
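
    ./base.json
    ./level1.json

(The file contents below are hypothetical, for illustration.)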

And base.json contains:
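
    {
      "title": "example",
      "include": "level1"
    }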

And level1.json contains:
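
    {
      "level1": {
        "key1": "value1",
        "key2": "value2"
      }
    }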

After running base.json through the processor, the contents of the level1 object in the level1.json file will replace "include":"level1", with the result being:
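
    {
      "title": "example",
      "key1": "value1",
      "key2": "value2"
    }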

Also, included files can contain other include files, so the implementation is a good example of a recursive algorithm.

There are example files in the ./test/resources directory that are slightly more complex and are used for the testing.

Development Environment

Immutability

The Ruby recursive method (full listing in the ./ruby directory) purposely modifies the passed-in object. Ruby's each is used to iterate over the key/value pairs and replace included content as needed: it deletes the "include" key/value pair and adds the JSON file content in its place. The returned object is thus a modified version of the object passed to the function.

The immutability of Clojure objects and the use of reduce-kv mean that all key/value pairs need to be added to the 'init' (m) collection via (assoc m k v). This was not necessary in the Ruby implementation.
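
A minimal sketch of that reduce-kv shape (hypothetical; not the project's actual code):

    (require '[clojure.string :as str])

    ;; Build a brand-new map: copy every key/value pair, but when a key
    ;; begins with "include", splice in the referenced file's contents.
    (defn process-json [base-dir obj]
      (reduce-kv
        (fn [m k v]
          (if (and (string? k) (str/starts-with? k "include"))
            (merge m (process-json base-dir (get-json base-dir v)))
            (assoc m k (if (map? v) (process-json base-dir v) v))))
        {}
        obj))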

A similar comparison, but with more complexity and detailed analysis, can be found here: FP vs. OO List Processing.

Pure Functions

You'll notice in the Ruby code that the instance variable @dir_name, which is created in the constructor, is used to create the JSON file path:

get_json_content(File.join(@dir_name,v)).values.first

The Clojure code has no instance variables, so base-dir must be passed to every function, as in the process-json-file macro:

`(process-json ~base-dir (get-json ~base-dir ~file-name))))

To an OOP developer, having base-dir as a parameter in every function definition may seem redundant and wasteful. The functional point of view is that:

  1. Having mutable data (@dir_name) can be the source of unintended behaviors and bugs.
  2. Pure functions will always produce the same result and have no side effects, no matter what the state of the application is.

These attributes improve reliability and allow more flexibility for future changes. This is one of the promises of FP.
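
For example, a path helper written this way is trivially testable because everything it needs arrives as an argument (a sketch with hypothetical names):

    ;; Pure: the same (base-dir, file-name) pair always yields the same
    ;; result, regardless of any application state.
    (defn json-path [base-dir file-name]
      (str base-dir "/" file-name))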

Final Thought

I highly recommend giving Clojure a try!

Bad joke:

I have slurped the Clojure Kool-Aid and can now only spit good things about it.

Sorry about that. 🙂

UPDATE (22-Aug-19): More Clojure love from @unclebobmartin: Why Clojure?


Embedding Angular Components into a Legacy Web App

 In a perfect world, you'd be able to create a greenfield Angular SPA from scratch. In the real world, that's usually not the case. That legacy web application has way too much baggage to realistically convert it to an SPA in a single shot. This is particularly true if you're currently using server-side rendering with (e.g.) JSP or Rails technology.

The only real solution is to incrementally move/upgrade pieces of UI logic and data access patterns (i.e. converting to REST interfaces). If you are planning a move to Angular*, a good starting point is to first embed small pieces of Angular-implemented logic into your existing application. This approach also allows the new Angular components to share CSS styles for seamless visual integration.

NgInterop is a simple TypeScript class that allows a legacy web application to have two-way communication (via pub/sub) with embedded Angular components. The underlying MessagingService class is an implementation of Message Bus pattern in Angular 2 and TypeScript.

Source code for the demo project is here: embedded-angular

Highlights:

  • 6: Side note on the new Angular 6 providedIn syntax. This saves you from having to add every service to the app.module.ts @NgModule providers list. Very handy!
  • 19: This saves the native JavaScript initialization callback function (see index.html below). This example only has one callback function, but it would be easy to extend this functionality to support multiple initialization callbacks.
  • 20: Add the NgInterop instance into the window object so that external JavaScript can simply call methods on window.ngInterop (again, see index.html below).
  • 32 and 38: Wrap the MessagingService subscribe/publish in an NgZone.run() call. This allows the external JavaScript to execute these functions in the Angular zone.

Other notes:

  • The typeClassMap object maps a BaseEvent class name (string) to a real class. The public static *_EVENT names provide safer access to the NgInterop functions from the Angular code.
  • There's no way to do type or parameter checking on the native JavaScript side, but it is still good practice to strongly type the BaseEvent derived classes. This provides good documentation and catches problems early on the TypeScript side.

Here is the stripped-down index.html that shows how the external JavaScript code interacts with NgInterop.

Highlights:

  • 4: After subscribeToEvents() is called by the NgInterop constructor, this function subscribes to AngularEvent messages. AngularEvent messages are published when the Angular 'Toggle Remove Button' is clicked in the AppComponent class.
  • 10: On an HTML click event an HtmlEvent message is published. The subscriber to the HtmlEvent messages is also in the AppComponent class.
  • 13: The callback function is added to the window object. This is executed prior to Angular being started up.
  • All logging is done by publishing LogEvent messages. These are displayed by the LogComponent class.

The example app has two Angular components that interact with the native JavaScript, as well as with each other, via NgInterop. The rest of the code should be self-explanatory.

Screenshot of the example app:


This project uses the following:

  • Angular CLI -- Of course.
  • RxJS  -- Used by the MessagingService.
  • Bootstrap 4 -- For the pretty buttons and "card" layout.
  • Moment.js  -- To more easily format the log timestamp.
  • Protractor  -- For running the Angular e2e tests.

Enjoy!
*There are probably similar integration approaches for React and Vue. I just don't know what they are.

UPDATE (7/27/18):

Here's a React approach: Creating & Managing components outside React.


The Problem With Google And Why You Should Care

When I read The Case Against Google in The New York Times last week it was with a typical technology interest eye. It was like reading the local paper about hit-and-runs, robberies, or the latest political scandal. Somewhat interesting, but it really doesn't affect me (thankfully). Or so I thought.

Then Medgadget published Our Case Against Google, which is a comprehensive (and damning) indictment of Google and the "GoogleFacebook duopoly". Their bottom line:

Google is an evil monopoly.

This is not a new red flag. Even nine years ago there were concerns: Is Google a Monopoly? Just ask Stack Overflow (and me). Note that this site's Google search traffic in 2009 was 95.9%. Now it's 98.4%, mostly because there are fewer search engine competitors around today.

Here's an overly simplistic summary of the effects of these monopolistic behaviors:

  1. It kills innovation. As the Raffs' journey shows, superior technology can be easily crushed.
  2. It kills high-quality content, which is well-documented in the Medgadget article.

Companies trying to innovate or content providers that are dependent on ad revenue for survival are, of course, directly affected by this. But I'm not either of those, so how does this affect me?

I'm an Android/Gmail/Google Docs&Maps person (i.e. no Apple here). I take it for granted that all of these wonderful Google-supplied technologies and conveniences are free. Google funds these goodies through their anti-competitive tactics and biased search algorithms. Does this mean that I'm benefiting from Google's bad behavior?  No duh!

So the logical conclusion is that my Google freebies aren't free after all.

Technology innovation and high-quality content are also things that I take for granted. But in reality, these are being sacrificed and are the actual cost. The struggles (and potential failure) of companies like Foundem and Medgadget are a very high price to pay, and it's happening all the time as a result of Google's behavior.

Why you should care: Monopolistic behavior carries this high price for all of us. This is true no matter what technology you use.

Modern-day technology anti-trust litigation (including the 1998 Microsoft case) involves complex legal/business/technology issues that are well worth becoming educated about.

Unfortunately, battling 800-pound Gorillas is a difficult business.  Asking this small med-tech community to raise awareness wherever possible is the least we can do.

Thanks for reading!

Update (3/22/18): Google and Facebook can’t help publishers because they’re built to defeat publishers

 


React JS Dynamic DOM Generation

I had implemented an Angular 4 dynamic DOM prototype using the Angular Dynamic Component Loader and wanted to do the same thing with React JS. After doing some research I found that it was not very obvious how to accomplish this.

By the time I was done, there ended up being two pieces of functionality worth sharing:

  1. Dynamic component creation using JSX.
  2. JSON-driven dynamic DOM generation.

Source code for the demo project is here: reactjs-dynamic-dom-generation

Try demo app live! (with CodeSandbox)

The project was created using create-react-app. The only other package added was axios for making the AJAX call to retrieve the JSON content.

Dynamic Component Creation

With JSX, dynamic content generation turned out to be pretty simple. The core piece of code is in DynamicComponent.js:

In the demo application, all available components register themselves via the ComponentService, which is just a singleton that maintains a simple hash map. For example:

As highlighted on lines 17-18, the desired React Component is first fetched from the ComponentService and then passed to JSX via <this.component ... />.

The JSX preprocessor converts this embedded HTML into JavaScript, with the 'React Element' type set to the passed component along with the additional attributes. I.e. if the UI type was 'switch', the hard-coded HTML would have been <SwitchComponent ... />, which is a perfectly acceptable JSX template.

Voilà, we have created a dynamic DOM element!

Note that Vue.js applications using JSX can use the same technique except they pass a Vue Component instead.

JSON-Driven Dynamic DOM Generation

In order to demonstrate dynamic DOM generation I have defined a simple UI JSON structure. The demo uses Bootstrap panels for the group and table elements and only implements a few components.

The UI JSON is loaded from the server when the application is started and drives the DOM generation. A DynamicComponent is passed a context (i.e. its associated JSON object) along with a path (see below). Each UI element has the following attributes:

  • name: A unique name within the current control context. It is used to form the namespace-like path that allows this component to be globally identified.
  • ui: The type of UI element (e.g. "output", "switch", etc.). This is mapped by the ComponentService to its corresponding React Component. If the UI type is not registered, the DefaultComponent is used.
  • label: Label used on the UI.
  • controls: (optional) For container components ("group", "table"), this is an array of contained controls.
  • value: (optional) For value-based components.
  • range: (optional) Specifies the min/max/step for the range component.

This structure can easily be extended to meet custom needs.

There are a number of implementation details that I'm not covering in this post. I think the demo application is simple enough that just examining and playing with the code should answer any questions. If not, please ask.

The example UI JSON file is here: example-ui.json:

The resulting output, including console logging from the switch and range components, looks like this.


This is, of course, a very minimal implementation that was designed to just demonstrate dynamic DOM generation. In particular, there is no UI event handling, data binding, or other interactive functionality that would be required to make a useful application.

Enjoy!


The Real Impediment to Interoperability

Medical device interoperability is one of my favorite subjects. With the meteoric rise of  IoT, there's more and more discussion like this: Why we badly need standardization to advance IoT.

The question for me has always been: Why is standardizing communications so hard to achieve? Healthcare providers, payors, EMR vendors, etc. all have their own incentives and priorities with respect to interoperability. The following is based on my experiences as a medical device developer, which have many similarities to the IoT world. As such, these observations may not be applicable to many other parts of the healthcare domain.

The Standard API

Let's use a simple home appliance scenario to illustrate why interoperability is so important. Say you have a mobile application that wants to be able to control your dishwasher. It may want to start/stop operation, show wash status, or notify you when a wash is complete. The App without and with interoperability is shown here:

Notes:

  • Without a standard API: The application has to write custom code for each dishwasher vendor. This is a significant burden for the App developer and prevents its use by a wider customer base.
  • With a standard API: New dishwasher models that implement the "dishWasher API" will just work without having to change the application (ideally, anyway). At the very least, integration of a new model is much easier. A sketch of this idea follows.
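
Here's a minimal sketch of such a standard interface, written as a Clojure protocol (all names are hypothetical):

    ;; The standard API every vendor agrees to implement.
    (defprotocol DishWasherAPI
      (start-wash [washer])
      (stop-wash [washer])
      (wash-status [washer]))

    ;; One vendor's model. The App codes only against DishWasherAPI,
    ;; so any compliant model works without changing the App.
    (defrecord AcmeWasher []
      DishWasherAPI
      (start-wash [_] :washing)
      (stop-wash [_] :stopped)
      (wash-status [_] :idle))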

Having a standard API that every App (and as importantly, other devices) can use to interoperate is critical for IoT (appliances and medical devices) growth. Besides all of the obvious benefits, in the healthcare industry the stakes are even higher (from Center for Medical Interoperability -- Need for Change):

It will improve the safety and quality of care, enable innovation, remove risk and cost from the system and increase patient engagement.

The other important thing to note is that the API communication shown above requires full Semantic interoperability. This is the most rigorous type of interoperability because the App must understand the full meaning of the data in order to function properly. E.g., knowing that a temperature is in ºF as opposed to ºC has significant consequences.

Let me also point out that even though semantic interoperability is not easy, the barriers to achieving it are generally not technical. There may be points of contention on protocols, units of measure, API signatures, functional requirements, etc., but when you're working within a specific discipline these can usually be worked out. Non-healthcare industries (telecom, banking, etc.) have proven it can be done.

Cost of Standards

There are a number of adoption hurdles for standards (e.g. HL7, FHIR, etc.). The costs of implementing and maintaining compliance with a standard are non-trivial:

  • The additional development and testing overhead required. On the development side, these interfaces are often not ideal for internal communication and can have a performance impact.
  • Some standards have a certification process (e.g. the Continua Certification Process) that requires rigorous testing and documentation to achieve.
  • If you have a data element that the standard does not currently cover, you may be faced with having to deal with the standard's approval process which can take a significant amount of time. For example, see the FHIR Change Request Tracking System which currently has thousands of entries. Again, this is not a technical issue. Having to deal with bureaucracy is just part of the overhead of conforming to a standard.

Company Motivations

Now let's try to understand what's important to a company that's trying to develop and market a product:

  1. Product differentiation. Provide vendor-unique features (a "niche") that are not available from competitors.
  2. Time to market. Being there first is critical for brand recognition and attracting customers.
  3. One-stop shop (multi-product companies). "If you use our product family your experience will be seamless!"

The last item is particularly important. Following the appliance theme:

This strategy is of course how Apple became the largest company in the world. In most industries, the big companies have the largest market share. This "walled garden" approach is the most natural way to lock out both large and small competitors.

The First Hint of Problems

Notice that the cost of interoperability can affect all three of the market goals a company is trying to achieve. Standards:

  1. Are a "me too" feature.
  2. Take time to implement.
  3. Punch holes in the desirable closed platform.

The actual impact depends on a lot of factors, but it can be significant.

The Real Impediment

But the real elephant in the room is Return on Investment (ROI):

The ROI on interoperability is inherently very low and often negative (Gain < Cost). This is because:

  1. As noted above, conforming to an external standard has a significant cost associated with it.
  2. Lack of demand. Interoperability is not something a customer is willing to pay extra for (zero Gain).

I think companies really do care about patient safety, quality of care, and healthcare cost reduction. This is what motivates their business and drives innovation. The reality is that ROI is also a factor in every product decision.

Side note: If conforming to a standard were mandated as a regulatory requirement, then the ROI question becomes moot and the expense would just be part of the cost of doing business.

I'm sure that interoperability is on every company's feature backlog, but it's not likely to become a primary actionable priority over all of the other, higher-ROI functionality. Those other features also contribute to improving healthcare, but the bottom line is hard to ignore.

Contributing resources and putting a logo on a standards organization's sponsor website is not the same thing as actually implementing and maintaining real interoperability in a product.

Apologies for the cynicism. It's just frustrating that nothing has really changed after all these years. Interoperability: Arrested Progress is close to four years old, and same old, same old (insanity) still prevails.

I think the reasons outlined here are a plausible explanation of why this is so. We're all still waiting for that game-changer.


Canine Mind Reading

That's right! It is wallace-shawn-inconceivable that the Indiegogo No More Woof campaign raised over $22,000 from 231 contributors. The project has been around since late 2013, but this is the first time I've run across it (via the recent IEEE article below). I just couldn't resist posting the picture.

It goes without saying that the Scandinavian-based company NSID (currently "hibernating") failed to deliver on its promise. This is well chronicled by IEEE: The Cautionary Tale of "No More Woof," a Crowdfunded Gadget to Read Your Dog's Thoughts.

The article even mentions Melon, a human EEG headband Kickstarter that I was involved with. I feel fortunate that I actually received a working device.

BCI (brain-computer interface) technology is very difficult even under the best of circumstances with humans. I think the correct thought sequence for working with any EEG-based device is:

  1. "I'm excited"
  2. "I'm curious who that is?"
  3. "I'm tired"

Woof!


Publishing an Angular 2 Component NPM Package

It was suggested on the Angular 2 Password Strength Bar post that I publish the component as an NPM package. This sounded like a good way to share, and it was something I'd never done before. So, here it is.

You should go to the Github repository and inspect the code directly. I'm just going to note some non-obvious details here.

Application Notes:

  • Added in-line CSS with the @Component styles metadata property.
  • In addition to passwordToCheck, added client configurable barLabel parameter.

Project Notes:

  • src: This is where the PasswordStrengthBar component (passwordStrengthBar.component.ts) is. The CSS and HTML are embedded directly in the @Component metadata. Also, note that tsconfig.json compiles the TypeScript to ../lib, which is what is distributed to NPM (app and src are excluded in .npmignore).
  • The index.d.ts and index.js in the root directory reference ./lib to allow importing the component without having to specify a TypeScript file. See the How TypeScript resolves modules section in TypeScript Module Resolution. I.e. after the npm installation is complete you just need this:

Development Notes:

Overall (and briefly), I find the TypeScript/JavaScript tooling very frustrating. I'm not alone; e.g.: The Controversial State of JavaScript Tooling. The JSON configuration files (npm's package.json, TypeScript, Karma, Webpack, etc.) are complex and the documentation is awful.

The worst part (IMO) is how fragile everything is. The tools and libraries change rapidly, with no consideration for backward compatibility or external dependencies. Updating versions invariably breaks the build. Online fixes often take you down a rabbit hole of unrelated issues. If you're lucky, the solution is just to continue using an older version. Use npm-check-updates at your own risk!

Feedback:

If you have questions or problems, find a bug, or have suggested improvements please open an issue. Even better, fork the project, make the desired changes and submit a pull request.

Enjoy!


Angular 2 Password Strength Bar

I spent a little time updating AngularJS Directive to test the strength of a password to be a pure Angular 2 component and thought I'd share.

A working demo and all of the code can be found here: Angular 2 Password Strength Bar.

Notes:

  • Upgraded to TypeScript and used the OnChanges interface.
  • Incorporation of the bar is now component-based:

<password-strength-bar [passwordToCheck]="account.password"></password-strength-bar>

  • Removed direct DOM modification and replaced with Angular 2 dynamic in-line styles.
  • Removed JQuery dependence.

Enjoy!


Old Nerds

Nobody is immune from aging.

In the tech industry, this can be a problem as described in Is Ageism In Tech An Under-The-Radar Diversity Issue?.  Programmer age distribution from the Stack Overflow Developer Survey 2016 Results clearly shows this:

[chart: developer age distribution, Stack Overflow Developer Survey 2016 Results]

Worth noting:

  • 77.2% are younger than 35.
  • Twice as many are under 20 as are over 50.

Getting old may suck, but if problem-solving and building solutions are your passion, being an old nerd (yes, I'm way over 35) really can look like this:

[image: nerd-age-satisfaction]
There's a lot of reasonable advice in Being A Developer After 40, but I think this sums it up best:

As long as your heart tells you to keep on coding and building new things, you will be young, forever.

I sure hope so! 🙂

UPDATE 13-Oct-16: Too Old for IT
