Archive for the ‘Google’ Category

The Problem With Google And Why You Should Care

Saturday, March 3rd, 2018

When I read The Case Against Google in The New York Times last week, it was with the detached eye of a typical technology reader. It was like reading the local paper about hit-and-runs, robberies, or the latest political scandal: somewhat interesting, but it really doesn't affect me (thankfully). Or so I thought.

Then Medgadget published Our Case Against Google, which is a comprehensive (and damning) indictment of Google and the "Google-Facebook duopoly". Their bottom line:

Google is an evil monopoly.

This is not a new red flag. Even nine years ago there were concerns: Is Google a Monopoly? Just ask Stack Overflow (and me). Note that this site's Google search traffic in 2009 was 95.9%. Now it's 98.4%, mostly because there are fewer search engine competitors around today.

Here's an overly simplistic summary of the effects of these monopolistic behaviors:

  1. It kills innovation. As the journey of Foundem's founders (the Raffs) shows, superior technology can be easily crushed.
  2. It kills high-quality content, which is well-documented in the Medgadget article.

Companies trying to innovate or content providers that are dependent on ad revenue for survival are, of course, directly affected by this. But I'm not either of those, so how does this affect me?

I'm an Android/Gmail/Google Docs & Maps person (i.e. no Apple here). I take it for granted that all of these wonderful Google-supplied technologies and conveniences are free. Google funds these goodies through its anti-competitive tactics and biased search algorithms. Does this mean that I'm benefiting from Google's bad behavior? No duh!

So the logical conclusion is that my Google freebies aren't free after all.

Technology innovation and high-quality content are also things that I take for granted. But in reality, these are being sacrificed and are the actual cost. The struggles (and potential failure) of companies like Foundem and Medgadget are a very high price to pay, and it's being paid all the time as a result of Google's behavior.

Why you should care: Monopolistic behavior carries this high price for all of us. This is true no matter what technology you use.

Modern-day technology antitrust litigation (including the 1998 Microsoft case) involves complex legal/business/technology issues that are well worth becoming educated about.

Unfortunately, battling 800-pound gorillas is a difficult business. Raising awareness wherever possible is the least this small med-tech community can do.

Thanks for reading!

Update (3/22/18): Google and Facebook can’t help publishers because they’re built to defeat publishers

 

Google Health: R.I.P.

Friday, June 24th, 2011

The announcement that Google Health was being discontinued shouldn't be a surprise.  In March the Wall Street Journal reported that once Larry Page took over the CEO role at Google he would be looking to cut projects:

Some managers believe Mr. Page will eliminate or downgrade projects he doesn't believe are worthwhile, freeing up employees to work on more important initiatives, these people said. One project expected to get less support is Google Health, which lets people store medical records and other health data on Google's servers, said people familiar with the matter.

There have been a number of retrospectives written today already, most concerned with the future of PHRs. For example:

It just goes to show you that being a pioneer does not guarantee long-term success. Microsoft HealthVault has done a much better job with device integration than Google Health did. There are many other factors that will determine the viability of PHRs in general, though. Adoption by the general population and a revenue model to support growth are just two of them.

UPDATE (6/25/11): Mr. HIStalk's take is the lead in Monday Morning Update 6/27/11. Two quotable statements:

Why did Google Health fail? Simple and obvious: consumer demand for personal health records is close to zero, which has always been the case and probably always will be.

Probably true.

Google predictably did what its know-it-all technology company predecessors have done over the years: dipped an arrogant and half-assed toe into the health IT waters; roused a loud rabble of shrieking fanboy bloggers and reporters...

OK, but how do you really feel?

Seriously though, I think Google's foray into the healthcare space was no different than its approach to any other market: "If we build it, they will come." And if they don't come, the plug gets pulled. Google has a graveyard full of products that suffered the same fate.

UPDATE (6/26/11): Some more:

UPDATE (7/1/11):

A Medical Device Gateway Data Standard?

Wednesday, November 18th, 2009

The Wipro OEM medical device gateway press release makes it all seem so easy (my highlight):

The device, consisting of interfaces that can feed-in data such as blood pressure, pulse rate, ECG reading and weight from the respective devices, is connected to the gateway that would format it into standard patient information and transmit it to either public health data platform such as Google Health or to private platforms like Microsoft Health Vault.

What exactly is "standard patient information"? Maybe they've finally developed the magic interoperability bullet. Yeah, right! I'm sure companies like Capsule see these kinds of claims all the time. Statements like these are unfortunate because they give the impression that health data interoperability is a given. Of course we know that is not the case.

Also, since when is Google Health a public health data platform?

Hat tip: Avantrasara

UPDATE (11/19/09):  Wipro ties up with Intel for rural medical solutions

Access to Medical Data: Are PC Standards and PHRs (You) the Answer?

Tuesday, September 22nd, 2009

Dana Blankenhorn's article Give medicine access to PC standards makes some good points about the medical device industry but (IMHO) misses the mark when trying to use PC standards and PHRs as models for working towards a solution.

I'll get back to his central points in a minute. One thing I find fascinating is the knee-jerk reaction in the comments to even a hint of government control. How on earth can someone jump from "industry standard" to a "march towards socialism"? We saw the same thing at this summer's town hall meetings and in Washington a couple of weeks ago. The whole health care debate is just mind-boggling!

Anyway, let's focus on the major points of the article. First:

Every industry, as its use of computing matures, eventually moves toward industry standards. It happened in law, it happened in manufacturing, it happened in publishing.

It has not happened, yet, in medicine.

Very true. In the medical device world, connectivity and interoperability are hot topics. A couple of recent posts -- Plug-and-Play Medicine and Medical Device Software on Shared Computers -- point out the significant challenges in this area. In particular, the development and adoption of standards is a resource-intensive and highly political process. But where's the incentive for the industry to go through this? Dana's comment addresses this (my emphasis):

The role I like best for government is in directing market incentives toward solutions, and not just to monopolies or bigger problems.

The reason health care costs jump every year is because market incentives cause them to. Those incentives must be changed, but the market won't by itself because the market profits from them.

Only government can transform incentives. ...

Like it or not, this may be the only way to push the medical industry to do the right thing. But those other industries didn't need government intervention in order to create their standards. Using PC (or other industry) standards as a model for facilitating medical data access just doesn't work. The health industry will have to be dragged to the table kicking and screaming, and the carrot (or stick) will have to be large in order for them to come to a consensus.

Second, I don't see the relationship between the use of PHRs and the promotion of standards.

By supporting PHRs, you support your right to your own data. You support liberating data from proprietary systems and placing it under industry standards.  You support integrating your health with the world of the Web, and the benefits such industry standards can deliver to you.

Taking responsibility for your own health data is great, but both Microsoft HealthVault and Google Health are proprietary systems. Just because your data is on the Web doesn't make it any more accessible. And even if one of these PHRs did become an industry standard, it would have very little impact on how EMRs communicate with each other or with medical devices in general.

There are no easy answers.

Is Google a Monopoly? Just ask Stack Overflow (and me).

Sunday, February 22nd, 2009

Today's New York Times Digital Domain: Everyone Loves Google, Until It's Too Big quotes Jeff Atwood, probably based on this post: The Elephant in the Room: Google Monoculture.

It's interesting that they picked Stack Overflow as an example because even Jeff says:

Now, I don't claim that Stack Overflow is representative of every site on the internet -- obviously it isn't.

I don't know, Jeff, I think you're being too modest. This blog doesn't have anywhere near the number of visits that SO does, but 95.87% of its search traffic for the last month was from Google. Based on an N of 2 then, I'd say that Google does have a monopoly on Internet searching!

UPDATE (3/5/09):

Is Google an Orwellian nightmare? Yes, Google Is Getting Too Big For Its Britches - Case In Point: Google Health. I'm not so sure. Linking Google's search dominance and the intended use of Google Health in some sort of surveillance conspiracy is a bit of a stretch. If they were related, it would probably just be a clever way to increase ad revenue. It is interesting that many people have a Big Brother fear reaction to the collection of any personal information. Personally, BB doesn't worry me nearly as much as all the little thieves out there who would steal my information for their own benefit, at my expense.

Exploring Cloud Computing Development

Saturday, February 7th, 2009

It's not easy getting your arms around this one. The term Cloud Computing has become a catch-all for a number of related technologies that have been used in enterprise-class systems for many years (e.g. grid computing, SOA, virtualization, etc.).

One of the primary concerns about cloud computing in Healthcare IT is privacy and security. A majority of the content and comments in just about every article or blog post about CC, whether it deals with health data or not, addresses these concerns. I'm going to save that discussion for a future post.

I'm also not going to dig into the multitude of business and technical trade-offs of these "cloud" options versus more traditional SaaS and other hybrid server approaches. People write books about this stuff, and there's a flood of Internet content that slices and dices these subjects to death.

My purpose here is to provide an overview of cloud computing from a developer's point of view so we can begin to understand what it would take to implement custom software in the cloud. All of the major technical aspects are well covered elsewhere and I'm not going to repeat them here. I'm just going to note the things that I think are important to take into consideration when looking at each option.

Here's a simplified definition of Cloud Computing that's easy to understand and will get us started:

Cloud computing is using the internet to access someone else's software running on someone else's hardware in someone else's data center while paying only for what you use.

As a consumer of, let's say, a social networking site or a PHR, this definition fits pretty well. There's even an EMR implemented in the cloud, Practice Fusion, that would fit this definition.

As a developer though, I want it to be my software running in the cloud so I can make use of someone else's infrastructure in a cost-effective manner. There are currently three major CC options. Cloud Options - Amazon, Google, & Microsoft gives a good overview of these.

The Amazon and Google diagrams below were derived from here.

Amazon Web Services

Amazon Cloud Services

The Amazon development model involves building Xen virtual machine images that are run in the cloud by EC2. That means you build your own Linux/Unix or Windows operating system image and upload it to be run in EC2. AWS has many pre-configured images that you can start with and customize to your needs. There are web service APIs (via WSDL) for the additional support services like S3, SimpleDB, and SQS. Because you are building self-contained OS images, you are responsible for your own development and deployment tools.

AWS is the most mature of the CC options. Applications that require the processing of huge amounts of data can make effective use of on-demand EC2 instances managed by Hadoop.

If you have previous virtual machine experience (e.g. with Microsoft Virtual PC 2007 or VirtualBox), one of the main differences in working with EC2 images is that they do not provide persistent storage. The EC2 instances have anywhere from 160 GB to 1.7 TB of attached storage, but it disappears as soon as the instance is shut down. If you want to save data you have to use S3, SimpleDB, or your own remote storage server.
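To make that concrete, here's a minimal sketch of stashing results in S3 so they survive an instance shutdown. It uses boto3, the current Python SDK, rather than the WSDL-era APIs described above, and the bucket and key names are made up.

```python
# Minimal sketch: persist a file to S3 so it outlives the EC2 instance.
# Uses boto3 (the modern AWS SDK for Python) rather than the original
# WSDL-based APIs; the bucket and key names are hypothetical. Credentials
# are assumed to come from the environment or an instance role.
import boto3

s3 = boto3.client("s3")

BUCKET = "my-results-bucket"   # hypothetical bucket
KEY = "run-001/output.csv"     # hypothetical object key

def save_results(local_path):
    """Copy a local file up to S3 before the instance goes away."""
    s3.upload_file(local_path, BUCKET, KEY)

def restore_results(local_path):
    """Pull the file back down on the next instance that needs it."""
    s3.download_file(BUCKET, KEY, local_path)

if __name__ == "__main__":
    save_results("/tmp/output.csv")
```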

It seems to me that having to manage OS images along with application development could be burdensome. On the other hand, having complete control over your operating environment gives you maximum flexibility.

A good example of using AWS is here: How We Built a Web Hosting Infrastructure on EC2.

Google AppEngine

Google App Engine

GAE allows you to run Python/Django web applications in the cloud. Google provides a set of development tools for this purpose, i.e. you can develop your application within the GAE run-time environment on your local system and deploy it once it's been debugged and works the way you want.

Google provides entity-based SQL-like (GQL) back-end data storage on their scalable infrastructure (BigTable) that will support very large data sets. Integration with Google Accounts allows for simplified user authentication.
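For a feel of what that entity model looks like in code, here's a minimal sketch using the original App Engine Python datastore API. The Greeting entity and GQL query follow the stock guestbook-style example from Google's docs, not anything specific to this site.

```python
# Minimal sketch of the original GAE Python datastore API
# (google.appengine.ext.db). The Greeting entity and GQL query mirror
# the stock guestbook example from Google's documentation.
from google.appengine.ext import db

class Greeting(db.Model):
    author = db.UserProperty()                     # ties into Google Accounts
    content = db.StringProperty(multiline=True)
    date = db.DateTimeProperty(auto_now_add=True)

def add_greeting(user, text):
    Greeting(author=user, content=text).put()      # persists the entity

def latest_greetings(limit=10):
    # GQL looks like SQL but queries the BigTable-backed entity store.
    return db.GqlQuery(
        "SELECT * FROM Greeting ORDER BY date DESC LIMIT %d" % limit)
```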

From the GAE web site:  "This is a preview release of Google App Engine. For now, applications are restricted to the free quota limits."

Microsoft Windows Azure

Microsoft Windows Azure

Azure is essentially a Windows OS running in the cloud.  You are effectively uploading and running  your ASP.NET (IIS7) or .NET (3.5) application.  Microsoft provides tight integration of Azure development directly into Visual Studio 2008.

For enterprise Microsoft developers the .NET Services and SQL Data Services (SDS) will make Azure a very attractive option.  The Live Framework provides a resource model that includes access to the Microsoft Live Mesh services.

Bottom line for Azure: If you're already a .NET programmer, Microsoft is creating a very comfortable path for you to migrate to their cloud.

Azure is now in CTP (Community Technology Preview) and is expected to be released later this year.

UPDATE (4/27/09) Here's a good Azure article:  Patterns For High Availability, Scalability, And Computing Power With Windows Azure.

Getting Started

All three companies make it pretty easy to get software up and running in the cloud. The documentation is generally good, and each has a quick start tutorial to get you going. I tried out the Google App Engine tutorial and had Bob in the Clouds on their server in about 30 minutes.

Bob's Guest Book

Stop by and sign my cloud guest book!
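For reference, the skeleton of that kind of tutorial app is tiny. Below is a sketch using the original webapp framework that shipped with the early SDK; the route and message are illustrative, not taken from the actual Bob in the Clouds app.

```python
# Minimal sketch of a classic (pre-webapp2) App Engine request handler,
# roughly the skeleton the guestbook tutorial starts from. The route and
# response text are illustrative only.
from google.appengine.ext import webapp
from google.appengine.ext.webapp.util import run_wsgi_app

class MainPage(webapp.RequestHandler):
    def get(self):
        self.response.headers["Content-Type"] = "text/plain"
        self.response.out.write("Welcome to the cloud guest book!")

application = webapp.WSGIApplication([("/", MainPage)], debug=True)

def main():
    run_wsgi_app(application)

if __name__ == "__main__":
    main()
```

Deploying is then just a matter of an app.yaml file and the SDK's upload command.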

Misc. Notes:

  • All three systems have Web portal tools for managing and monitoring uploaded applications.
  • The Dr. Dobbs article Computing in the Clouds has a more detailed look at AWS and GAE development.

Which is Best for You?

One of the first things that struck me about these options is how different they all are.  Because of this, from a developer's point-of-view I think you'll quickly have a gut feeling about which one best matches your current skill sets and project requirements. The development components are just one piece of the selection process puzzle though. Which one you actually might end up using (it could very well be none) will also be based on all your other technical and business needs.

UPDATE (6/23/09): Here's a good high level cloud computing discussion: Reflections on Executive Briefing Event: Cloud & RIA. I like the phrase "Cloud Computing is Elastic" because it captures most of the appealing aspects of the technology. It's no wonder Amazon latched on to that one -- EC2.

Dreaming of Flexible, Simple, Sloppy, Tolerant in Healthcare IT

Saturday, January 3rd, 2009

I was recently browsing in the computer (nerd) section of the bookstore and ran across an old Joel Spolsky-edited book: The Best Software Writing I. Even though it's been about four years, good writing is good writing, and there is still a lot of insightful material there.

One of the pieces that struck a chord for me was Adam Bosworth's ISCOC04 Talk (fortunately posted on his Weblog). He was promoting the use of simple user and programmer models (KISS -- simple and sloppy for him) over complex ones for Internet development. I think his jeremiad is just as relevant to the current state of EMR and interoperability. Please read the whole thing, but for me the statement that gets to the point is this:

That software which is flexible, simple, sloppy, tolerant, and altogether forgiving of human foibles and weaknesses turns out to be actually the most steel cored, able to survive and grow while that software which is demanding, abstract, rich but systematized, turns out to collapse in on itself in a slow and grim implosion.

Why is it that when I read "demanding, abstract, rich but systematized" the first thing that comes to mind is HL7? I'm not suggesting that some sort of open ad hoc system is the solution to The EMR-Medical Devices Mess. But it's painfully obvious that what has been built so far closely resembles "great creaking rotten oak trees".

The challenge for the future of Healthcare interoperability is really no different than that of the Internet as a whole (emphasis mine):

It is in the content and the software's ability to find and filter content and in the software's ability to enable people to collaborate and communicate about content (and each other).

I would contend that the same is true for medical device interoperability. Rigid (and oftentimes proprietary) systems are what keep devices from being able to communicate with one another. IHE has created a process to try to bridge this gap, but the complexity of becoming a member, creating an IHE profile, and having it certified is also a significant barrier.

Understanding how and why some software systems have grown and succeeded while others have failed may give us some insights. Flexible, Simple, Sloppy, Tolerant may be a dream, but it also might not be a bad place to start looking for future innovations.

Adam also had this vision while he was at Google: Thoughts on health care, continued (see the speech pdf):

... we have heard people say that it is too hard to build consistent standards and to define interoperable ways to move the information. It is not! ... When we all make this vision real for health care, suddenly everyone will figure out how to deliver the information about medicines and prescriptions, about labs, about EKGs and CAT scans, and about diagnoses in ways that are standard enough to work.

Also see the Bosworth AMIA May07 Speech (pdf) for how this vision evolved, at least for Google's PHR.

UPDATE (2/9/09): Here's a  related article: The Truth About Health IT Standards – There’s No Good Reason to Delay Data Liquidity and Information Sharing that furthers this vision:

We don’t have to wait for new standards to make data accessible—we can do a ton now without standards.  What we need more than anything else is for people to demand that their personal health data are separated from the software applications that are used to collect and store the data.

UPDATE (4/17/09): John Zaleski’s Medical Device Open Source Frameworks post is also related.

Use of an open-source framework approach is probably as good as any. As a management model, I don’t see it as being that much different from the way traditional standards have been developed. Open-source just provides a more ad-hoc method for building consensus. Less bureaucracy is a good thing though. It may also allow for the introduction and sharing of more innovative solutions. In any case, I like visions.

USB plug-n-play (plug-n-pray to some) may be a reasonable connectivity goal, but it does not deal at all with system interoperability. Sure, you can connect a device to one or more monolithic (and stable) operating systems, but what about the plethora of applications software and other devices?  This just emphasizes the need to get out of the “data port” (and even “device driver”) mind-set when envisioning communication requirements and solutions.

Interoperability: Google Protocol Buffers vs. XML

Monday, July 14th, 2008

Google recently open sourced Protocol Buffers: Google's Data Interchange Format (documentation, code download). What are Protocol Buffers?

Protocol buffers are a flexible, efficient, automated mechanism for serializing structured data – think XML, but smaller, faster, and simpler.

The documentation is complete and worth a quick read through. A complete analysis of PB vs. XML can be found here:  So You Say You Want to Kill XML.....
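To make the "smaller, faster, and simpler" claim concrete, here's a minimal sketch of the Python workflow, assuming a hypothetical person.proto compiled with protoc.

```python
# Minimal sketch of using a generated Protocol Buffer message in Python.
# Assumes a hypothetical person.proto compiled with `protoc --python_out=.`:
#
#   message Person {
#     required string name  = 1;
#     required int32  id    = 2;
#     optional string email = 3;
#   }
#
import person_pb2

person = person_pb2.Person()
person.name = "Jane Doe"
person.id = 1234
person.email = "jane@example.com"

# Compact binary encoding -- this is where PB beats XML on size and speed.
data = person.SerializeToString()

# The receiver needs the same generated class (i.e. the shared .proto),
# which is exactly the "One True Schema" coupling Neward writes about.
decoded = person_pb2.Person()
decoded.ParseFromString(data)
print(decoded.name)
```

The equivalent XML document would be self-describing text, which is flexibility you pay for in size and parsing effort.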

As discussed, one of the biggest drawbacks for us .NET developers is that there is no support for the  .NET platform. That aside, all of the issues examined are at the crux of why interoperability is so difficult. Here are some key points from the Neward post:

  1. The advantage to the XML approach, of course, is that it provides a degree of flexibility; the advantage of the Protocol Buffer approach is that the code to produce and consume the elements can be much simpler, and therefore, faster.
  2. The Protocol Buffer scheme assumes working with a stream-based (which usually means file-based) storage style for when Protocol Buffers are used as a storage mechanism. ... This gets us into the long and involved discussion around object databases.
  3. Anything that relies on a shared definition file that is used for code-generation purposes, what I often call The Myth of the One True Schema. Assuming a developer creates a working .proto/.idl/.wsdl definition, and two companies agree on it, what happens when one side wants to evolve or change that definition? Who gets to decide the evolutionary progress of that file?

Anyone who has considered using a "standard" has had to grapple with these types of issues. All standards gain their generality by trading something off (speed, size, etc.). This is why most developers choose to build proprietary systems that meet their specific internal needs. For internal purposes, there's generally not a need to compromise. PB is a good example of this.

This also seems to be true in the medical device industry.  Within our product architectures we build components to best meet our customer requirements without regard for the outside world. Interfacing with others (interoperability) is generally a completely separate task, if not a product unto itself.

Interoperability is about creating standards which means having to compromise and make trade-offs.  It would be nice if Healthcare interoperability could be just a technical discussion like the PB vs. XML debate. This would allow better integration of standards directly into products so that there would be less of the current split-personality (internal vs. external  needs) development mentality.

Another thing I noticed about the PB announcement was how quickly it was held up against XML as a competing standard. With Google's clout, simply giving it away creates a de facto standard. Within the medical connectivity world though, there is no Google.

I've talked about this before, but I'm going to say it again anyway. From my medical device perspective, with so many confusing standards and competing implementations the decision on what to use ends up not being based on technical issues at all. It's all about picking the right N partners for your market of interest, which translates into N (or more) interface implementations. This isn't just wasteful, it's also wrong. Unfortunately, I don't see a solution to this situation coming in the near future.

Goosh, a Google Command Line

Monday, June 2nd, 2008

For us old Unix hackers, Goosh, a Google Command Line is very cool.

Check it out here: Goosh.org.

Google Health Launches: More PHR for the masses.

Monday, May 19th, 2008

It's finally here: Drumroll, Please: Google Health Launches!

If you use any of the Google applications (like Gmail), it's just as easy as all the others:

Google Health

It will be interesting to see if this and HealthVault have an impact on how patients interact with their medical service providers. The privacy and security issues are certain to remain a significant barrier to adoption. Only time will tell.

UPDATE (5/23/08): See Delving Into Google Health's Privacy Concerns

UPDATE (5/24/08): Apparently this Slashdot reference is "uninformed": Why Google Health and HealthVault are not covered by HIPAA.

UPDATE (7/6/08): I ran across this post that talks about Microsoft HealthVault security: You Will Never Get It Microsoft. Here's a quote from it:

Microsoft obviously think that I don't know how HealthVault works. I don't have to know how it works, I only know that it will and can be abused one day.