Below is a copy of the SDK that I received in April 2015. I successfully built and ran the AndroidMelonBasicSample application on my Motorola phone. It actually communicated with the Melon headband!
Melon was purchased by DAQRI in February 2015. They still maintain a Melon product page, but the Google+ Melon Headband - Android Users community (see update below) has been all but silent for over 6 months. That, along with the website message "We're back in the lab crafting new things", is a good indication that Melon development is no longer active.
I recently attended a Deep Learning (DL) meetup hosted by Nervana Systems. Deep learning is essentially a technique that allows machines to interpret sensory data. DL attempts to classify unstructured data (e.g. images or speech) by mimicking the way the brain does, using artificial neural networks (ANNs).
DL is a branch of machine learning based on a set of algorithms that attempt to model high-level abstractions in data by using multiple processing layers with complex structures.
Deep learning involves training a computer to recognize often complex and abstract patterns by feeding large amounts of data through successive networks of artificial neurons, and refining the way those networks respond to the input.
This article also presents some of the DL challenges and the importance of its integration with other AI technologies.
From a programming perspective, constructing, training, and testing DL systems starts with assembling ANN layers.
For example, categorization of images is typically done with Convolutional Neural Networks (CNNs, see Introduction to Convolution Neural Networks). The general approach is a stack of alternating convolution and pooling layers that extract features from the image, followed by fully connected layers that perform the classification.
Construction of a similar network using the neon framework looks something like this:
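This is a minimal sketch, assuming a small network with two convolution/pooling stages feeding fully connected layers; the layer shapes, initializer scale, and 10-category output are illustrative choices, not values from the original post:

Python
# A minimal CNN sketch in neon (layer sizes are illustrative assumptions)
from neon.initializers import Gaussian
from neon.layers import Affine, Conv, Pooling
from neon.models import Model
from neon.transforms import Rectlin, Softmax

init = Gaussian(scale=0.01)

# Alternating convolution/pooling layers extract image features;
# the trailing fully connected (Affine) layers perform the classification.
layers = [
    Conv((5, 5, 16), init=init, activation=Rectlin()),
    Pooling((2, 2)),
    Conv((5, 5, 32), init=init, activation=Rectlin()),
    Pooling((2, 2)),
    Affine(nout=500, init=init, activation=Rectlin()),
    Affine(nout=10, init=init, activation=Softmax()),  # assumes 10 categories
]

model = Model(layers=layers)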
Properly training an ANN involves processing very large quantities of data. Because of this, most frameworks (see below) utilize GPU hardware acceleration. Most use the NVIDIA CUDA Toolkit.
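In neon, for example, the CUDA-backed GPU backend is selected when the backend is generated. A quick sketch (the batch size here is an arbitrary choice):

Python
from neon.backends import gen_backend

# Select the CUDA GPU backend; requires the NVIDIA CUDA Toolkit and a
# CUDA-capable card. backend='cpu' also works, but trains far more slowly.
be = gen_backend(backend='gpu', batch_size=128)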
Each application of DL (e.g. image classification, speech recognition, video parsing, big data, etc.) has its own idiosyncrasies that are the subject of extensive research at many universities. And of course, large companies are leveraging machine intelligence for commercial purposes (Siri, Cortana, self-driving cars).
Relative to the size of a standard Ubuntu Docker image, I thought the 250MB CoreOS image was "lean". Earlier this month I went to a Docker talk by Brian DeHamer and learned that there are much smaller Linux base images available on Docker Hub. In particular, he mentioned Alpine, which is only 5MB and includes a package manager.
Here are the instructions for building the same Apache server image from the previous post with Alpine.
The Dockerfile has significant changes:
Dockerfile
1 # Build myapp server Docker container
2 FROM alpine:latest
3 MAINTAINER MyName
4 RUN apk --update add apache2
5 RUN rm -rf /var/cache/apk/*
6 ENTRYPOINT ["httpd"]
7 CMD ["-D", "FOREGROUND"]
8 COPY dist /var/www/localhost/htdocs
Explanation of differences:
line 2: The base image is alpine:latest.
lines 4-5: Unlike the CoreOS image, the base Alpine image does not include Apache. These lines use the apk package manager to install Apache2 and clean up after.
lines 6-7: Use the exec form of the Dockerfile ENTRYPOINT and CMD commands. When a container is started, httpd runs in the foreground (the -D FOREGROUND flag) as the container's main process, which is what keeps the container alive.
line 8: The static web content is copied to a different directory.
Building and pushing the image to DockerHub is the same as before:
Shell
$ sudo docker build -t dockeruser/myapp . # This will create a 'latest' version.
$ sudo docker push dockeruser/myapp
Because of the exec additions to the Dockerfile, the command line for starting the Docker image is simpler:
Shell
# Instead of 9001, use 80 or 8080 if you want to provide external access to the application
$ sudo docker run -d -p 9001:80 --name my-app dockeruser/myapp
...little hope for open standards or a universal language for how they do that. It's time for regulatory guidance to make that happen.
...one analyst observed that the industry seems to be forming "walled gardens" rather than a coherent network that encourages openness and interoperability.
Sound familiar? This is the same medical device interoperability struggle that has been going on for over 25 years. The IoT is still in its infancy and I sure hope they have better luck developing a "common carrier" than we did.
The Android application became available yesterday.
I'm having trouble focusing (according to the Melon anyway), but at least the device and software seem to be functioning. As expected, the software needs a lot of work. No use in bashing beta software though.
I just downloaded the alpha SDK. Now the real fun begins...
Many dating services ask countless questions. With EEG matching, there should be no need for the questions that most people shade the truth on.
I have no idea what this 'Color Spectrum Analysis of EEG Data' (from Biometric Dating) is, but it's sure pretty:
Granted, they are in the process of testing their theory by using data from long-term married couples. I sure hope they're using happily married couples, otherwise the consequences could be disastrous!
I'm sure 'the kid in the garage without a degree' is no dummy, but this premise:
And so that large percentage of medicine that is effectively being practiced by non-MDs is going to expand.
is simply ludicrous.
There's a big difference between creating health and wellness appliances and mobile applications and diagnosing and treating patients. The distinction is outlined in FDA clarifies the line between wellness and regulated medical devices. If you claim your product acts like a doctor (treats or diagnoses), or if it doesn't fall into the "low risk" category, then your company will have to follow FDA regulatory controls.
The HL7 FHIR (pronounced “fire”) standard has been under development for a while. It became a Draft Standard for Trial Use (see DSTU Considerations) in Jan 2014. The recent announcement of the vendor collaboration Argonaut Project has fueled some "interoperability excitement"™.
The best technical overview I've read is this whitepaper: The HL7 Games Catching FHIR. In particular, it does a good job of comparing FHIR with HL7 v3. Summary:
HL7’s FHIR™ standard has learned from the mistakes of HL7 v3, and is surprisingly delightful.
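Part of that delight is that FHIR is plain RESTful resources over HTTP. As a rough sketch, fetching a Patient resource looks like this in Python; the server base URL and resource id are hypothetical placeholders:

Python
import requests

# Hypothetical FHIR server base URL and resource id
base = "http://fhir.example.org/base"

# FHIR exposes each resource type at a predictable endpoint:
# GET [base]/Patient/[id] returns a single Patient resource
resp = requests.get(base + "/Patient/example",
                    headers={"Accept": "application/json+fhir"})
patient = resp.json()
print(patient["resourceType"])  # "Patient"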
The most common use for JavaScript frameworks is to provide dynamic client-side user interface functionality for a web site. There are situations where a JS application does not require any services from its host server (see example unhosted apps). One of the challenges for this type of application is how to distribute it to end users.
This post will walk through creating a static AngularJS application (i.e. no back-end server) and how to create and publish a lean Docker container that serves the application content. I will mention some tooling but discussion of setting up a JS development environment is beyond the scope of this article. There are many resources that cover those topics.
Also note that even though I'm using AngularJS, any static web content can be distributed with this method.
Side Note on AngularJS
One of the major advantages of using AngularJS over the many JavaScript framework alternatives is its overwhelming popularity. Any question or issue you may encounter will typically be answered with a simple search (or two).
The easiest way to create a full-featured Angular application is with Yeoman. Yeoman is a Node.js module (npm) that, along with its Angular generator, creates a project that includes all of the build and test tools you'll need to maintain an application.
Shell
$ npm install -g yo
$ npm install -g generator-angular
Generate the Angular application with yo. Accepting all the defaults will include "Bootstrap and some AngularJS recommended modules." There's probably more functionality included than you'll need, but modules can be removed later.
Shell
$ mkdir myapp
$ cd myapp
$ yo angular myapp
The yo command will take a little while to complete because it has to download Angular and all of the modules and dependencies.
Start the server with Grunt (which needs to be installed separately).
Shell
$ grunt serve
The application can be viewed in a browser at http://localhost:9000/#/.
Building the Application Distribution
After removing the 'Allo, Allo' cruft and creating your custom application, create a distribution with:
Shell
$ grunt # You may need to add --force if jshint causes failures.
This will create a dist directory that contains the static application content.
A typical Ubuntu Docker container requires more than a 1GB download. A leaner Linux distribution is CoreOS. The coreos/apache container has a standard Apache server and is only ~250MB.
Add a Dockerfile file to the myapp directory:
Dockerfile
# Build myapp server Docker container
FROM coreos/apache
MAINTAINER MyName
COPY dist /var/www/
The key here is the COPY command, which copies the contents of the dist directory to the container's /var/www directory. This is where the Apache server will find index.html and serve it on port 80 by default. No additional Apache configuration is required.
Create the Docker container:
Shell
$ sudo docker build -t dockeruser/myapp . # This will create a 'latest' version.
Now push the container to your Docker Hub account:
Shell
$ sudo docker push dockeruser/myapp
The dockeruser/myapp Docker container is now available for anyone to pull and run on their local machine or a shared server.
Starting the Application with Docker
The application can be started on a client system by downloading and running the dockeruser/myapp container.
Shell
# Instead of 9001, use 80 or 8080 if you want to provide external access to the application
$ sudo docker run -d -p 9001:80 --name my-app dockeruser/myapp /usr/sbin/apache2ctl -D FOREGROUND
The run command will download the container and dependencies if needed. The -d option runs the Docker process in the background while apache2ctl runs in the container in the foreground. The application will be running on http://localhost:9001/#/.
To inspect the Apache2 logs on the running Docker instance:
Shell
$ sudo docker exec -it my-app /bin/bash
root@bfba299706ad:/# ls /var/log/apache2/
access.log  error.log  other_vhosts_access.log
root@bfba299706ad:/# exit # Exit the bash shell and return to host system
$
To stop the server:
Shell
$ sudo docker stop my-app
If you've pushed a new version of the application to Docker Hub, users can update their local version with:
Shell
$ sudo docker pull dockeruser/myapp
This example shows how Docker containers can provide a consistent distribution medium for delivering applications and components.