Author Archive

Introducing elfeed-curate

Monday, October 2nd, 2023

The Need

I read a lot of RSS feeds on a select set of topics (see About | Bob's Content of Interest). I sometimes tweet/toot about individual posts, but I'd like to go further and regularly (and efficiently) publish a curated collection of articles. There are three primary functional requirements to accomplish this:

Collection: Deciding which articles you want to export (publish). Filtering can be done based on their title, subject matter (tags), and time constraints. My preference is to specifically mark (or tag) selected entries independent of their subject matter (see below).

Annotation: Some article titles speak for themselves, but others are best presented with associated comments that allow the reader to know what's special about the content. You need the ability to add annotations to individual articles that are included in the published result.

Publication: Once a set of articles has been identified, exporting them in an easily consumable format is the next step. One important component of exporting this content is grouping the articles based on their subject matter.

The Investigation

There are many RSS feed aggregators out there, but none of them came close to addressing the curation requirements listed above. Apparently, all of those link collection sites are just rolling their own.

As a software developer, finding such a glaring functionality gap that needs to be filled is a real win-win! 🎉 Not only is this something I want to use, but there are probably a few others who will also find a solution helpful.

Now all I had to do was design and develop that solution. I've been using Emacs and Elfeed as my RSS reader for many years. Extending Emacs functionality is a cult-like activity that attracts many. I'm not brain-washed, but even as a (non-evil) Doom user, I do spend a lot of time tweaking my Emacs configuration.

Anyway, providing this RSS curation functionality as an elfeed extension was not only the ideal technical solution, but it was also the perfect opportunity to author my first Emacs package (another win, I hope).

The Solution

Elfeed-curate is an add-on for Elfeed, the Emacs-based RSS feed management system, that makes it easy to curate feed entries.

Elfeed's tagging and search functionality takes care of the collection requirements and elfeed-curate adds annotation and publication (exporting) capabilities.

I have an opinionated workflow that looks like this:

See Curation Workflow for details.

A key factor (essentially, a non-functional requirement) for making this workflow practical is that each step (marking, annotation, export review, etc.) has to be fast. I think the combination of elfeed and elfeed-curate accomplishes this. I'm also sure there will be refinements and improvements in the future.

Export example

The same content exported to Hugo is here: 21-Sep-2023 Content of Interest

Feedback is always welcome. Thanks!

Dealing with ClojureScript Cross-build NPM Dependencies

Sunday, October 2nd, 2022

I recently posted this Clojurians Slack re-frame question:

I have a deps.edn/figwheel-main re-frame project that I'm trying to add re-frame-10x to. The deps.edn (day8.re-frame/re-frame-10x {:mvn/version "1.5.0"}) and dev.cljs.edn (:preloads [day8.re-frame-10x.preload]) seem correct and the project builds without errors. When the web app is started though, I get this run-time error:

Uncaught Error: Bad dependency path or symbol: highlight.js.lib.core

I and others have also seen this type of build-time error:

No such namespace: highlight.js/lib/core, could not locate highlight/js_SLASH_lib_SLASH_core.cljs, highlight/js_SLASH_lib_SLASH_core.cljc, or JavaScript source providing "highlight.js/lib/core" (Please check that namespaces with dashes use underscores in the ClojureScript file name) in file target/public/cljs-out/dev/re_highlight/core.cljs

Both errors are originating from re-highlight/core.cljs:
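The offending reference is re-highlight's string (NPM) require of highlight.js, which looks roughly like this (reconstructed from the error messages above, not copied from the re-highlight source):

  ;; re-highlight.core (approximate)
  (ns re-highlight.core
    (:require ["highlight.js/lib/core" :as hljs]   ;; string require of an NPM module (shadow-cljs style)
              ;; ...additional requires elided...
              ))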

re-highlight (a re-frame-10x dependency) is built with shadow-cljs, while the re-frame project is built with lein/deps.edn/figwheel-main. The re-frame project does not have a direct dependency on either re-highlight or highlight.js. The challenge here is providing the highlight.js (an NPM library) dependencies to re-highlight.

There are ways to include NPM libraries in ClojureScript projects (e.g. the ClojureScript compiler's :npm-deps option, the newer :bundle target, shadow-cljs's built-in npm support, and CLJSJS packages), but each is specific to a particular build system.

This cross-build system situation seems unique though. Providing the highlight.js library to re-highlight turned out to be an eye-opening deep dive into ClojureScript build systems. See the Commentary section at the end of this post.

Long story short, I was able to find a solution for exposing NPM libraries to shadow-cljs projects included in a deps.edn build.

It involves creating a Javascript bundle containing the needed NPM dependency (highlight.js) using webpack and then "manually" making it available to re-highlight.

Here is a step-by-step guide for adding re-frame-10x to a deps.edn re-frame project.

First, create package.json with the needed dependencies.

Final package.json:

Create src/js/main.js with these contents. The require() statements are needed so webpack will include the NPM libraries in the output bundle.

Create webpack.config.js with these contents:

Add the following lines before the compiled app.js in resources/public/index.html.

This is where the magic is happening.

  • The highlight.js dependencies are being manually added with goog.addDependency() before the rest of the application dependencies are loaded.
  • The ClojureScript compiler-generated app.js has the following code. We're just loading goog/base.js first so the goog functions are defined.

Note: The paths and JS build file names (e.g. app.js) above may not match your specific project structure. If so, they would need to be adjusted accordingly.

Add re-frame-10x dependencies to deps.edn:
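A sketch of the relevant deps.edn fragment (whether it belongs in :deps or under a dev alias's :extra-deps depends on your project layout):

  ;; deps.edn (sketch; the :dev alias is an assumption)
  {:aliases
   {:dev {:extra-deps {day8.re-frame/re-frame-10x {:mvn/version "1.5.0"}}}}}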

The dev.cljs.edn file needs the following so re-frame-10x is loaded properly:
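Roughly like this (my-app.core is a placeholder for your entry namespace, other figwheel-main/compiler options are omitted, and the :closure-defines trace flag is the one re-frame-10x documents; check the version you're using):

  ;; dev.cljs.edn (sketch)
  {:main            my-app.core
   :preloads        [day8.re-frame-10x.preload]
   :closure-defines {re-frame.trace.trace-enabled? true}}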

Run the project:

Voilà! The highlight.js dependency in re-highlight is satisfied and re-frame-10x runs as expected.

There must be a better way to do this. I just couldn't find it...

Commentary

This solution seems rather hacky and took way too long to discover. I can't tell you the number of rabbit holes I went down with the NPM inclusion methods listed above. Each of them just uncovered further dependency and configuration issues or would have resulted in undesirable refactoring. The code base I'm working with is rather large and I didn't want to completely change the build system just to add a development tool.

To be honest, the CLJ/CLJS build tools and their cross-pollinated dependency systems (lein, shadow-cljs, etc.) are very confusing. There is no idiomatic/standard way to build Clojure(Script) projects. Everyone is using a different combination or permutation of build systems. Also, the clojure/clj CLI and tooling just plain suck. I think these things are a real barrier to Clojure(Script) adoption.

On Selecting Clojure

Monday, March 8th, 2021

The Clojure and Scheme Compared comments about Peter Bex's Clojure from a Schemer's perspective article caught my attention. I know that discussion is about language features, but it got me thinking about the different criteria used for selecting a programming language like Clojure. E.g., it's interesting that Irreal considers the JVM a negative, while I consider it a positive. It just goes to show that every situation is unique, i.e. there is no right or wrong in these types of technology decisions.

I've only dabbled with Clojure over the last few years. See Exploring Clojure (and FP vs OOP). The real motivation was to explore the advantages of functional programming. Shifting your fundamental programming paradigm from OOP to FP has far-reaching impact. Language warts are not going to be a major factor in determining your success in doing this type of transformation.

The other major language/technology selection considerations involve organizational headwinds. For most large companies, there are three major challenges:

  1. Inertia. Convincing management that an esoteric language and techniques (FP) are worth diverting and retraining existing personnel for is a difficult hill to climb. I also think there's a certain amount of organizational entrenchment going on here. For example, the C#/CLR vs. Java/JVM divide is really more cultural than technical. Because niche technologies like Clojure/FP are also generally viewed in this cultural context ("esoteric"), they don't stand a chance.
  2. Talent. Unless you're in Finland, it's difficult to find qualified people. This is also an on-going issue because programs developed today need to be maintained for the long-term (years). With all-remote employment now being more mainstream, maybe this will become less of an issue.
  3. Trends. As you can see from the 5-year trends below, Clojure has been on a slow decline. Also, Lisp languages are minor players. They are orders of magnitude smaller than the mainstream (Java, JavaScript, Python) and do not show up on the TIOBE top 20 lists (Lisp is #36).

Even if you're a small company or a startup, selecting Clojure is still a tough call. This would likely only happen if there were a critical mass of programmers (#2) that already had positive Clojure/FP development experiences.

There are many good reasons to choose Clojure as a front-end (JavaScript) and/or back-end (JVM) technology solution. I would love to see first-hand how well Clojure/FP performs on a large-scale project.  Unfortunately, there are also plenty of non-technical reasons that prevent organizations from choosing Clojure.

Full Cycle Teams in a FDA regulated setting

Monday, January 4th, 2021

The 200X hot topic was Agile development in an FDA regulated setting. Over a decade later this should (hopefully) be a settled issue. I can’t imagine anyone still doing waterfall these days. The new challenge for medical device companies is implementing Full Cycle Teams (FCTs), which is well described in Full Cycle Developers at Netflix — Operate What You Build.

This organizational structure increases the speed of feature delivery and allows for experimentation to further improve the customer experience. Tooling and automation ("paved roads") are key. The model that Netflix came up with:

"Full cycle developers" is a model where a development team, equipped with amazing developer productivity tools, is responsible for the full software life cycle: design, development, test, deploy, operate, and support.

If you work for a large enough enterprise, you likely have teams of people that provide the following functions:

  • Product development (creates and designs application software; includes architects, product owners, and scrum masters)
  • Quality assurance (QA). They test the software. For a medical device company, we call this team Verification and Validation (V&V).
  • Site Reliability Engineering (SRE). Ensures scalability and reliability of the infrastructure and applications. They do performance testing and may implement some Chaos engineering techniques.
  • Development operations (DevOps). Manage the code repositories, shared development tools, CI/CD pipelines, middleware, databases, etc.
  • Infrastructure management (on-prem hardware and operating systems)
  • Cloud management (same as above, but in the cloud)
  • Applications support (monitor and manage applications in production)

Do not confuse FCTs with "Full Stack Teams" (see Full Stack Pronounced Dead). This "stack" refers to technologies that are used to implement a typical web-based application (e.g. LAMP).

FCTs are about supporting functionality end-to-end (product idea to production), but both have the challenge of developer specialization in common. An FCT has to broaden its skill set even further to include application/infrastructure deployment, monitoring, and support. This is the future!

Full Cycle Team Challenges for Medical Device Companies

The transformation from a legacy organization (as described above) to FCTs is made even more challenging for a medical device company creating software that has to maintain FDA regulatory controls (see Quality System Regulation Subpart C–Design Control § 820.30).

Below is a list of regulatory and transition considerations that impact the release process. Most are associated with keeping the Design History File (DHF) documentation up-to-date. The organizational challenge in a FCT world is figuring out who is responsible for these tasks.

Spoiler alert: The suggested answers should be obvious, but many times the best I can do is just ask the question. Every organization, and even different teams within a single organization, will have different solutions. These can be tough problems to solve. Don't shoot the messenger!

Medical Device Data System (MDDS)

Not all of your software may be under FDA Class II/III regulatory controls. Some could fall under MDDS, see Identifying an MDDS. There is still some risk associated with MDDS, but special controls and premarket notification -- the 510(k) -- are not necessary (see MDDS Rule).

MDDS software requires the same QMS documentation (see MDDS Section VI-E. Current Good Manufacturing Practices (CGMP)/QS Regulation/MDR Compliance of the rule) so most of the items listed here still apply.

Also, see Comment 25 from the rule, which addresses "modular software", i.e. mixing MDDS components with medical device components. The response says "The MDDS regulation does not necessarily prevent modular implementation," but the FDA can't make a "generalized determination" on the various ways these combinations may be made. This may be a situation you run into, and the FDA suggests it is best to contact them if you have questions.

Validation and Verification

General Principles of Software Validation; Final Guidance for Industry and FDA Staff (PDF)

Based on the intended use and the safety risk associated with the software to be developed, the software developer should determine the specific approach, the combination of techniques to be used, and the level of effort to be applied. While this guidance does not recommend any specific life cycle model or any specific technique or method, it does recommend that software validation and verification activities be conducted throughout the entire software life cycle.

FDA guidance documents, and recognized standards like IEC 62304, tell you what to do but leave the how up to the organization.

Let's highlight the SRS* to System test verification from the V-model. This is essentially end-to-end testing. In a microservice-based architecture, each FCT is likely responsible for different sets of services. These services may be dependent on the services provided by other teams.

Which team is responsible for ensuring that the entire system is functioning properly (i.e. end-to-end test protocols and results) after changes are made to one or more of these services?

In an ideal world, these end-to-end tests are completely automated, but even then someone still needs to maintain them.

Validation testing (was the right product built?) presents even more challenges, as a single FCT may only be responsible for a small portion of the entire product.

Risk analysis

From Medical Device Design Risk Management Basic Principles:

Risk analysis is typically done by a cross-functional team that may span multiple business units, but it is probably not unreasonable for the FCT Product Owner to drive this process and get the documentation updated as needed.

Traceability

From the FDA guidance:

A source code traceability analysis is an important tool to verify that all code is linked to established specifications and established test procedures.

Creating this documentation is well suited for automation. It still requires ensuring that all requirements and related test scenarios are properly tagged so they can be parsed to produce a release report.
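As a hypothetical illustration (the tag format and names here are made up, not from any particular tool), the core of such a report generator can be as small as grouping tagged test results by requirement ID:

  ;; Traceability sketch: test results tagged with requirement IDs.
  (def results
    [{:test "login-succeeds"    :requirements ["SRS-101"]           :status :pass}
     {:test "login-lockout"     :requirements ["SRS-101" "SRS-102"] :status :pass}
     {:test "audit-log-written" :requirements ["SRS-200"]           :status :fail}])

  (defn trace-matrix
    "Group test results by requirement ID: requirement -> tests and outcomes."
    [results]
    (reduce (fn [m {:keys [test requirements status]}]
              (reduce (fn [m req]
                        (update m req (fnil conj []) {:test test :status status}))
                      m
                      requirements))
            {}
            results))

  ;; (trace-matrix results)
  ;; => {"SRS-101" [{:test "login-succeeds", :status :pass} ...], "SRS-200" [...], ...}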

Software Design Evidence

From the FDA guidance:

The Quality System regulation requires that at least one formal design review be conducted during the device design process. However, it is recommended that multiple design reviews be conducted (e.g., at the end of each software life cycle activity, in preparation for proceeding to the next activity).

This is a challenge for any Agile-based development process so is not specific to the FCT-based organization. Running formal design reviews as early in the development process as possible should be a team responsibility.

Manual Approval Gates

For many unregulated software products, continuous integration (CI) and continuous delivery (CD) are a reality, i.e. code can be pushed, run through the CI/CD pipelines, and delivered to customers without human intervention.

It is very unlikely (not impossible though I suppose, depending on the product) that this would occur for FDA-regulated software. Even with automated document generation, software deployment to production will still require human sign-off steps and audit trails.

Off-The-Shelf (OTS) Software

OTS/SOUP Software Validation documentation needs to be kept up-to-date. This is mostly a book-keeping exercise for OTS/SOUP that is part of the software product. For tools though, see OTS/SOUP Software Validation Strategies.

Another consideration to keep in mind for including 3rd party software into your product is the software license. The corporate (legal) policy should dictate license requirements, but teams would be aided by an automated tracking process.

Infrastructure

Installation, operational, and performance qualification -- IQ/OQ/PQ. FDA regulated software must have these processes in place to ensure that after any changes are made, the infrastructure continues to meet quality requirements.  With the microservice architecture becoming a best practice, the team would now be responsible for documenting the IQ/OQ/PQ for their particular microservice or container flavor(s).

Cloud Offerings

Serverless architectures. (Note: I'm most familiar with AWS, so I'll use their cloud products as examples; Azure and GCP have similar offerings.) One of the key advantages of Lambda, Fargate, RDS, and similar managed/SaaS products is that they take the undifferentiated heavy lifting off your plate: AWS is responsible for the care and maintenance of the underlying infrastructure and servers. For on-prem servers, this is something the organization spends significant time and money on, but these expenditures do not directly benefit the customer. Serverless allows companies to focus their efforts on things that make a difference to their customers.

How do you ensure IQ/OQ/PQ quality when you don't have control over the servers that are running your application(s)?

Another consideration: Teams will need to take regulatory impact into consideration when selecting new cloud technologies.

Infrastructure as Code (IaC)

The use of IaC (e.g. CloudFormation or Terraform) may require new release cycle processes. I.e. since this code is not part of the application, you may want to have a separate release cycle for when the infrastructure is updated. The same is true for container (Docker) code updates.

The FCT should be responsible for the IaC associated with their product as it directly impacts both functionality and performance.

Transformation

When thinking about transforming to an FCT-based organization, the 2019 AWS re:Invent keynote by Andy Jassy comes to mind. His "transformation" refers to migrating from on-premise to cloud infrastructure (AWS, of course), but I think the non-technical transformation recommendations he outlines (start: 5:04, end: 11:48) are also applicable to the FCT organizational change:

I think aggressive goals (item #2) are particularly important. Legacy organizations have a lot of inertia that needs to be overcome in order to move things forward. Breaking those initial barriers is even more difficult when you're having to deal with regulatory requirements.

Bottom Line

FDA regulatory requirements add tasks and documentation to the software release process. This has always been the case for medical device companies, but how this additional work is managed when trying to implement full-cycle teams can be a complicated problem to solve.

Just like unregulated development, providing the tooling to automate these tasks is the key to allow teams to deliver quality software to customers more quickly.

---------------

*SRS, Software Requirements Specification. The old-school waterfall requirements document. I don't miss those days!

VirtualBox 6.1.x Windows 10 2004 Upgrade Problem Resolution

Thursday, May 28th, 2020

This is just a quick note that will hopefully save someone time.

When I upgraded Windows 10 (64-bit) from 1909 to 2004, I found that VirtualBox 6.1.x no longer worked properly. All of my guest instances (Ubuntu, Mint, etc.) failed to start. Specifically, they just hung with a blinking cursor and there were no errors in the logs.

There were no reports on this problem on the VirtualBox on Windows Hosts forum. [30-May-2020 Update] See No VMs Work On Windows 10.

It turns out that when Windows 10 2004 is installed it enables the Windows Hypervisor Platform feature. Note that the Hyper-V feature was disabled prior to the upgrade and remained so after.

To check this setting run OptionalFeatures.exe from a Windows command shell. You'll see this:

The resolution to the hang problem is to disable this feature. Doing this is simple:

  1. Uncheck the Windows Hypervisor Platform checkbox (above).
  2. Reboot. Even though it's not indicated when you do step #1, a reboot is required to disable the feature.

That's it!

314 Digits of Pi (Python to Clojure)

Thursday, March 12th, 2020

Pi Day (3/14) is in a couple of days so I want to wish everyone a Happy Pi Day 2020! It's great to see that 55% of Americans plan to celebrate and many will be eating pie or pi-themed food (whatever that is).

My work colleague and basketball buddy Stan sells a nerd t-shirt here: 314 Digits of Pi.py. It has the Python code on the front and the results on the back. I "won" one of these at our annual White Elephant gift exchange in December. Even though the Amazon Best Sellers Rank is #12,306,667 in Clothing, Shoes & Jewelry, I really like it!

I've been staring at the code backward in the mirror for a number of months:

This got me wondering: what would this algorithm look like in Clojure?

The first pass on the port was pretty straightforward, but I think it's worth noting some of the subtle differences. All of the code is here: pi-digits.

Here's the original Python code and result:

For the interested, here's an explanation of the calculation of Pi using fixed-point math for speed improvements: Pi - Machin. The Machin formula, developed by John Machin in 1706 (!), is:
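  π/4 = 4·arccot(5) − arccot(239)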

And here's a Clojure version that returns the identical result:
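A sketch of that direct port (the exact code lives in the pi-digits repo); the mutable state from the Python version comes along for the ride as atoms:

  ;; Fixed-point arccot(x) scaled by unity, direct port style: atoms + while.
  (defn arccot-while [x unity]
    (let [x2     (*' x x)
          total  (atom (quot unity x))
          xpower (atom (quot unity x))
          n      (atom 3)
          sign   (atom -1)
          term   (atom 1)]
      (while (pos? @term)
        (swap! xpower quot x2)
        (reset! term (quot @xpower @n))
        (swap! total +' (*' @sign @term))
        (swap! n + 2)
        (swap! sign -))
      @total))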

One problem with the arccot (arc-cotangent) implementation is that it just duplicates the Python logic and is not idiomatic Clojure. Instead of coding this in a non-functional style, i.e. using mutable state (atom), let's create a functional version:
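A sketch of the functional version (again, names may differ from the pi-digits repo):

  ;; Fixed-point arccot(x) with loop/recur instead of mutable state.
  (defn arccot [x unity]
    (let [x2    (*' x x)
          start (quot unity x)]
      (loop [total start, xpower start, n 3, sign -1]
        (let [xpower (quot xpower x2)
              term   (quot xpower n)]
          (if (zero? term)
            total
            (recur (+' total (*' sign term)) xpower (+ n 2) (- sign)))))))

  ;; Machin's formula with 10 guard digits, truncated at the end.
  (defn pi-digits [digits]
    (let [unity (bigint (.pow (java.math.BigInteger/valueOf 10) (+ digits 10)))]
      (quot (*' 4 (-' (*' 4 (arccot 5 unity)) (arccot 239 unity)))
            (bigint (.pow (java.math.BigInteger/valueOf 10) 10)))))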

We use loop/recur for a recursive implementation. This gives constant stack usage, the effect you would get from tail-call optimization (TCO), which the JVM doesn't provide natively. Here are the execution times (average of 10 runs) for calculating Pi with the three implementations:

Time in seconds by number of digits:

Method          10,000   50,000   100,000   200,000
python           0.158     3.74      14.9      60.3
clojure-while    0.260     5.14      19.8      78.6
clojure-recur    0.252     5.10      19.8      78.5

Python is certainly faster, but the purpose here was not to compare computation speed. It was to get a Clojure version of the t-shirt made! Who's the real nerd now? 🙂

Bioimpedance Analysis to Detect Sleep Apnea

Tuesday, February 11th, 2020

The company I worked for over 10 years ago, CardioDynamics*, manufactured an impedance cardiography (ICG) diagnostic device. The technology behind ICG and the Onera Bioimpedance Patch to Detect Sleep Apnea is called thoracic electrical bioimpedance (TEB).

It's no surprise that Onera has leveraged research on monitoring lung resistivity with this technology (e.g. here and here) and is applying AI for automated respiratory event detection. Since electrode placement is important for reliable data acquisition, the patch is a good design choice, but it doesn't look like it would be that comfortable to wear to sleep.

Another review, Wearable Patch Uses Machine Learning to Detect Sleep Apnea, notes that assessing sleep apnea requires additional physiological signals to be monitored and that more work needs to be done to combine this technology with these other signals.

………………………………………………

*Purchased by SonoSite in 2009. SonoSite has since stopped manufacturing the BioZ DX.

EEG Devices at CES 2020

Monday, January 27th, 2020

The Consumers Electronics Show (CES) 2020 was held earlier this month in Las Vegas. Of the ~4,400 exhibiting companies, the "Digital Health" category had 573 exhibitors. Of these, I found 9 companies utilizing EEG technology.

Besides the usual sleep aid applications, there are still a lot of focus/meditation/relaxation apps. Mental health screeners seem to be a new trend. Other than the visual cortex monitor (NextMind), BCI devices for use as video game controllers have cooled down.

Here's a quick summary, including the application categories they fall into.

  • URGONight (Sleep): EEG-feedback therapy for sleep.
  • NextMind (BCI): Worn on the back of your head to monitor EEG from the brain’s visual cortex. This provides unique BCI capabilities.
  • Muse S (Focus): Multi-sensor meditation device that provides real-time feedback on your brain activity, heart rate, breathing, and body movements to help you build a consistent meditation practice.
  • BrainUp (Sleep, Mental Health): Brain state real-time monitoring / brain wave deep sleep-aid wearable / brain and mental health screening.
  • BrainCo (Focus): Helps train your brain to perform at your best in everything you do.
  • Entertech brainwave (Sleep, Focus, Mental Health): Dedicated to mental health applications and products around spiritual health like sleep, meditation, and relaxation, etc.
  • Healium (Focus): Healium is the world's first virtual and augmented reality platform powered by brainwaves (uses the Muse 1 headband) and heart rate via consumer wearables.
  • HippoScreen (Mental Health): The SEA (Stress EEG Assessment) System provides an objective indicator of mental health to save doctors’ time on interviews.
  • iBand Plus (Sleep): Learns about your sleep cycle and intelligently adjusts the audio-visual signals to induce lucid dreams, make you fall asleep easily, and wake up naturally.

Exploring Clojure (and FP vs OOP)

Sunday, January 27th, 2019

I've always viewed functional programming (FP) from afar, mostly because object-oriented programming (OOP) is the dominant development methodology I've been using (Java, Ruby, C#, etc.) for many years.  A majority of the articles I've read on FP have statements like this:

If you’ve never tried functional programming development, I assure you that this is one of the best time investments you can make. You will not only learn a new programming language, but also a completely new way of thinking. A completely different paradigm.

Switching from OOP to Functional Programming gives an overview of the differences between FP and OOP. It uses Scala and Haskell for the FP example code, but I think it still does a good job of getting the major concepts across:

I do not think that FP, or any single paradigm/language/framework/etc. for that matter, is a silver bullet. On the contrary, I'm a true believer in the "right tool for the job" philosophy.  This is particularly true in the software industry where there is such a wide variety of problems that need to be solved.

This view for programming paradigms is cute but is actually misleading:

As a developer, it's important to always be learning new problem-solving approaches, i.e. adding new tools to your tool belt. This will not only allow you to select the best solution(s) for the job, but you'll be better able to recognize the trade-offs and potential problem areas that might arise with any solution. I think understanding FP concepts will make you a better programmer, but not necessarily because you are using FP techniques and tools.

Functional Programming Is Not a Silver Bullet sums it up best:

Don’t be tricked into thinking that functional programming, or any other popular paradigm coming before or after it, will take care of thinking about good code design instead of us.

The purpose of this article is to present my own experiences in trying to use Clojure/FP as an alternative approach to traditional OOP. I do not believe there is anything here that has not already been covered by many others, but I hope another perspective will be helpful.

Lisp

I chose a Lisp dialect for this exploration for several reasons:

  1. I have some Lisp experience from previous projects and was always impressed with its simplicity and elegance. I really wanted to dig deeper into its capabilities, particularly macros (code as data - programs that write programs). See Lisp, Smalltalk, and the Power of Symmetry for a good discussion of both topics.
  2. I'm also a long-time Emacs user (mostly for org-mode) so I'm already comfortable with Lisp.
  3. This profound advice:

For all you non-Lisp programmers out there, try LISP at least once before you die. You will not regret it.

Lisp has a long and storied history (second oldest computer language behind Fortran). Here's a good Lisp historical read: How Lisp Became God's Own Programming Language.

I investigated a variety of Lisp dialects (Racket, Common Lisp, etc.) but decided on Clojure primarily because it has both JVM and Javascript (ClojureScript) support. This would allow me more opportunity to use it for real-world projects. This is also why I did not consider alternative FP languages like Haskell and Erlang.

Lastly, the obligatory XKCD cartoon that makes fun of Lisp parentheses (which I address below):

Why Clojure?

Why Clojure? I’ll tell you why… provides a good summary and references:

  1. Simplicity
  2. Java Interoperability
  3. REPL (Read Eval Print Loop)
  4. Macro
  5. Concurrency
  6. ClojureScript
  7. Community

Here are some more reasons from Clojure - the perfect language to expand your brain?

  1. Pragmatism (videos at Clojure TV)
  2. Data processing: Sequences and laziness
  3. Built-in support for concurrency and parallelism

There are also many good articles that describe the benefits of FP (immutable objects, pure functions, etc.).

Clojure (and FP) enthusiasts claim that their productivity is increased because of these attributes. I've seen this stated elsewhere, but from the article above:

Clojure completely changed my perspective on programming. I found myself as a more productive, faster and more motivated developer than I was before.

It also has high praise from high places:

Who doesn't want to hang out with the smart kids?

Clojure development notes

The following are some Clojure development observations based on both the JSON Processor (see below) and a few other small ClojureScript projects I've worked on.

The REPL

The read-eval-print loop, and more specifically the networked REPL (nREPL) and its integration with Emacs clojure-mode/CIDER, is a real game-changer. Java has nothing like it (well, except for the Java 9 JShell, which nobody knows about) and even Ruby's IRB ("interactive ruby") is no match.

Being able to directly evaluate expressions without having to compile and run a debugger is a significant development time-saver. This is a particularly effective tool when you are writing tests.
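For example, a typical interaction (a made-up function and test, evaluated directly in the running REPL):

  ;; Define a function, define a test, and re-run it in place --
  ;; no compile/run/debug cycle.
  (require '[clojure.string :as str]
           '[clojure.test :refer [deftest is run-tests]])

  (defn slugify [s]
    (-> s str/lower-case (str/replace #"\s+" "-")))

  (deftest slugify-test
    (is (= "hello-world" (slugify "Hello World"))))

  (run-tests)   ;; => {:test 1, :pass 1, :fail 0, :error 0, :type :summary}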

Parentheses

A lot of people (like XKCD above) make fun of the Lisp parentheses. I think there are two considerations here:

  1. Keeping parentheses matched while editing. In Clojure, this also includes {} and []. Using a good editor is key - see Top 5 IDEs and text editors for Clojure. For me, Emacs smartparens in strict mode (i.e. don't allow mismatches at all), plus some wrap/unwrap and slurp/barf keyboard bindings, all but solved this issue.
  2. When reading Lisp code, I think parentheses get a bad rap. IMO, the confusion has more to do with the difference in basic Lisp flow control syntax than stacked parentheses.  Here's a comparison of Ruby and Lisp  if  syntax:
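For instance (a made-up snippet, not the comparison from the original post):

  ;; Ruby:
  ;;   def status(balance)
  ;;     if balance >= 0 then "ok" else "overdrawn" end
  ;;   end
  ;;
  ;; Clojure -- if is an expression; the condition and both branches are just forms:
  (defn status [balance]
    (if (>= balance 0)
      "ok"
      "overdrawn"))

  (status 100)   ;; => "ok"
  (status -5)    ;; => "overdrawn"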

Once you get used to the differences, code is code. More relevantly, bad code is bad code no matter what language it is. This is important. Here's a good read on the subject: Effective Mental Models for Code and Systems ("the best code is like a good piece of writing").

Project Management

Leiningen ("automating Clojure projects without setting your hair on fire") is essentially like Java's Maven, Ruby's Rake, or Python's pip. It provides dependency management, plug-in support, applications/test runners, customized tasks, etc.

Coming from a Maven background, lein configuration management and usage made perfect sense. The best thing I can say is that it always got the job done, and even more important, it never got in the way!

Getting Answers

I found the documentation (ClojureDocs) to be very good for two reasons:

  1. Every function page has multiple examples, and some are quite extensive.  You typically only need to see one or two good examples to understand how to use a function for your purposes. Having to read the actual function description is rarely needed.
  2. Related functions. The "SEE ALSO" section provides links to functions that can usually improve your code: if  → if-not, if-let, when,... This is very helpful when you're learning a new language.

I lurked around some of the community sites (below). The threads I read were respectful and members seemed eager to help.

Clojure Report Card

Language: A

On the whole, I was very pleased with the development experience. Solving problems with Clojure really didn't seem that much different from other languages. The extensive core language capabilities along with the robust ecosystem of libraries (The Clojure Toolbox) makes Clojure a pleasure to use.

I see a lot of potential for practical uses of Clojure technologies. For example, Clojurified Electron plus reagent-forms allowed me to build a cross-platform Electron desktop form application in just a couple of days.

I was only able to explore the tip of the Clojure iceberg. Based on this initial experience, I'm really looking forward to utilizing more of the language capabilities in the future.

FP vs OOP: B

What I was able to experience in this brief exploration did not live up to my expectations for FP. The lower grade reflects the fact that the Clojure projects I've worked on were not big enough to really take advantage of the benefits of FP described above.

This is a cautionary tale for selecting any technology to solve a problem. Even though you might choose a language/framework that advertises particular benefits (FP in this case), it doesn't necessarily mean that you'll be able to take advantage of those benefits.

This also highlights the silver bullet vs good design mindset mentioned earlier. To be honest, I somehow thought that Clojure/FP would magically solve problems for me. Of course, I was wrong!

I'm sure this grade will improve for future projects!

Macros: INC (incomplete)

I was also not able to fully exercise the use of macros like I wanted to. This was also related to the nature of the projects.  I normally do DSL work with Ruby, but next time I'll be sure to try Clojure instead.

TL;DR

The rest of this article digs a little deeper into the differences between the Ruby and Clojure implementations of the JSON Processor project described below.

At the end of the day, the project includes two implementations of close to identical functionality that can be used for comparison. Both have:

    1. Simple command line parsing and validation
    2. File input/output
    3. Content caching (memoization)
    4. JSON parser and printer
    5. Recursive object (hash/map) traversal

The Ruby version (~57 lines) is about half the size of Clojure (~110 lines). This is a small example, but it does point out that Ruby is a simpler language and that there is some overhead with the Clojure/FP programming style (see Pure Functions, below).

JSON Processor

The best way to learn a new language is to try to do something useful with it. I had a relatively simple Ruby script for processing JSON files. Reproducing its functionality in Clojure was my way of experiencing the Clojure/FP approach.

The project is here:   json-processor

The Ruby version is in the ./ruby directory, while the Clojure version is in ./src/json_processor. See the README.md file for command line usage.

The processor is designed to simply detect a JSON key that begins with "include" and replace the include key/value pair with the contents of a file's (./<current_path>/value.json) top-level object.  Having this include capability allows reuse of JSON objects and can improve management of large JSON files.

So, if two files exist:

and base.json contains:

And level1.json contains:

After running base.json through the processor, the contents of the level1 object in the level1.json file will replace "include":"level1", with the result being:

Also, included files can contain other include files, so the implementation is a good example of a recursive algorithm.

There are example files in the ./test/resources directory that are slightly more complex and are used for the testing.

Development Environment

Immutability

Here's the Ruby recursive method:

The passed-in object is purposely modified. The Ruby each function is used to iterate over each key/value pair and replace the included content as needed. It deletes the "include" key/value pair and adds the JSON file content in its place. Again, the returned object is a modified version of the object passed to the function. 

The immutability of Clojure objects and the use of reduce-kv mean that all key/value pairs need to be added to the 'init' (m) collection ((assoc m k v)). This was not necessary for the Ruby implementation.
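A minimal sketch of this reduce-kv approach (string keys as produced by clojure.data.json, and a simplified get-json; the repo's actual implementation differs in its details):

  (ns json-processor.sketch
    (:require [clojure.data.json :as json]   ;; org.clojure/data.json
              [clojure.java.io :as io]
              [clojure.string :as str]))

  (defn get-json
    "Read and parse <base-dir>/<file-name>.json (simplified stand-in)."
    [base-dir file-name]
    (json/read-str (slurp (io/file base-dir (str file-name ".json")))))

  (defn process-json
    "Recursively replace any key starting with \"include\" with the
     contents of the referenced file's top-level object."
    [base-dir obj]
    (reduce-kv
      (fn [m k v]
        (cond
          ;; include pair: splice in the (recursively processed) file contents
          (str/starts-with? k "include")
          (merge m (process-json base-dir (first (vals (get-json base-dir v)))))

          ;; nested object: recurse
          (map? v)
          (assoc m k (process-json base-dir v))

          ;; everything else must be copied into the new map explicitly
          :else
          (assoc m k v)))
      {}
      obj))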

A similar comparison, but with more complexity and detailed analysis, can be found here: FP vs. OO List Processing.

Pure Functions

You'll notice in the Ruby code that the class variable @dir_name, which is created in the constructor, is used to create the JSON file path:

get_json_content(File.join(@dir_name,v)).values.first

The Clojure code has no class variables, so base-dir must be passed to all functions, as in the process-json-file macro:

`(process-json ~base-dir (get-json ~base-dir ~file-name))))

To an OOP developer, having base-dir as a parameter in every function definition may seem redundant and wasteful. The functional point of view is that:

  1. Having mutable data (@dir_name) can be the source of unintended behaviors and bugs.
  2. Pure functions will always produce the same result and have no side effects, no matter what the state of the application is.

These attributes improve reliability and allow more flexibility for future changes. This is one of the promises of FP.

Final Thought

I highly recommend giving Clojure a try!

Bad joke:

I have slurped the Clojure Kool-Aid and can now only spit good things about it.

Sorry about that. 🙂

UPDATE (22-Aug-19): More Clojure love from @unclebobmartin: Why Clojure?