When we think of physicians in the future, we envision someone who has the world’s medical experience and knowledge at his or her fingertips. This image has woven itself into our cultural fantasies—witness science fiction characters, such as the “emergency medical hologram” of Star Trek: Voyager. Is it possible that future technology will achieve this? Where do we stand in our evolution towards a state where medical knowledge can be synthesized and presented to clinicians, so that all clinical decision-making is current and informed, is based on evidence and best practices, is individualized to the patient at hand, and is immediately available?

Access to general knowledge has advanced so dramatically in the past 10 years that we almost take it for granted. We have become accustomed to maps and turn-by-turn directions anywhere in the world from our smartphones. We can search the Internet for anything and instantly retrieve what we are looking for. It has become woven into our cultural fabric. We are used to being able to conduct reliable searches using ordinary language, full sentences, or keywords—and do so from our phones, our computers, even from home devices that respond to spoken inputs.

Google advanced this capability when it introduced the Knowledge Graph to its search results in 2012. With the Knowledge Graph, semantic-search information gathered from a wide variety of sources provides structured, detailed information about a topic, so the user can see everything relevant in a single view without having to navigate to other sites and assemble the information by hand.

In medicine, we are not there yet. Health data remains fragmented into institution-centered silos. In the past few years, significant advances have allowed the sharing of copies of individual records between institutions, but that still leaves the data fragmented. Business and policy impediments have resulted in institutions resisting the blending of their clinical data into single universal patient-centered data stores. There is no business motivation for doing that, and there are regulatory and policy hurdles to overcome.


The Flow Health Medical Knowledge Graph

Similar to Google’s Knowledge Graph (which is proprietary to Google), the Flow Health Medical Knowledge Graph arises from using Machine Learning (ML) to analyze large amounts of clinical data and identify patterns. ML can be thought of as a collection of computer algorithms that work on large data sets; the result of applying ML to those data sets is Artificial Intelligence (AI). In these terms, AI is the product of ML working on big data. But that alone is not quite enough: it would be like Internet search before the Knowledge Graph. The Medical Knowledge Graph takes AI findings and assembles them so that the desired information is quickly available in a single view, context-sensitive and individualized to the patient at hand.
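The core idea of assembling related findings into a single view can be sketched in a few lines. This is purely illustrative: the entity names, relationships, and class design below are assumptions for the example, not Flow Health's actual data model.

```python
# Minimal sketch of a knowledge graph: entities are nodes, typed
# relationships are edges, and a "single view" is assembled by
# collecting the edges around one entity.
from collections import defaultdict

class KnowledgeGraph:
    def __init__(self):
        # adjacency list: subject -> list of (relation, object) pairs
        self.edges = defaultdict(list)

    def add(self, subject, relation, obj):
        self.edges[subject].append((relation, obj))

    def view(self, entity):
        """Assemble everything directly connected to an entity."""
        return {rel: obj for rel, obj in self.edges[entity]}

kg = KnowledgeGraph()
kg.add("type 2 diabetes", "first-line therapy", "metformin")
kg.add("type 2 diabetes", "monitored by", "HbA1c")
kg.add("type 2 diabetes", "common comorbidity", "hypertension")

# One query yields the assembled context, rather than forcing the
# user to gather each fact separately.
print(kg.view("type 2 diabetes"))
```

A production graph would of course add provenance, confidence scores, and patient-specific overlays, but the "one entity, one assembled view" shape is the same.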

The state of ML, and of deep learning algorithms in particular, has become quite advanced. The limiting factor so far has been the availability of large data sets, particularly data sets organized in modern ways that are amenable to ML. That is what is so momentous about the recent collaboration between Flow Health and the Department of Veterans Affairs (VA). It takes the clinical data found in VA medical facilities across the country and amasses it in a modern fashion on the Flow Health platform, making it available to ML. To get an idea of scale: the VA is the country’s largest integrated health delivery system; it cares for almost 9 million veterans and holds records on 22 million veterans over a 20-year history. The data include lab values, imaging, DNA genomic data, prescription and diagnosis history, and clinical notes: about 30 petabytes in all. Beyond the structured data, there are 4 billion clinical notes that can be combed using Natural Language Processing, and 4.5 billion medical images that can power deep learning on images.
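What "combing clinical notes with Natural Language Processing" means in practice is turning free text into structured, ML-ready fields. Real pipelines use trained language models; the toy sketch below stands in for that with simple pattern matching, and the note text and field names are invented for illustration.

```python
# Toy stand-in for NLP over a clinical note: extract structured
# findings (vitals, labs, medication starts) from free text.
import re

note = "Pt reports fatigue. BP 142/91. HbA1c 8.2%. Started metformin 500 mg."

def extract(note):
    findings = {}
    bp = re.search(r"BP (\d+)/(\d+)", note)
    if bp:
        findings["systolic"], findings["diastolic"] = map(int, bp.groups())
    a1c = re.search(r"HbA1c ([\d.]+)%", note)
    if a1c:
        findings["hba1c"] = float(a1c.group(1))
    meds = re.findall(r"Started (\w+) (\d+) mg", note)
    findings["new_meds"] = [(name, int(dose)) for name, dose in meds]
    return findings

print(extract(note))
# -> {'systolic': 142, 'diastolic': 91, 'hba1c': 8.2,
#     'new_meds': [('metformin', 500)]}
```

Multiplied across billions of notes, this kind of extraction is what converts a narrative record into data that pattern-finding algorithms can work on.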

With a rich data set like this, powerful AI can be built. Insights into disease can be garnered like never before. Individualized treatment recommendations, based on deep knowledge from a vast store of data, can be made at the point of care.


What does this mean for the practice of medicine?

So does that mean that medical clinicians will be replaced by AI robots—the “medical hologram” of science fiction? No. A Google search retrieves information but does not make a decision. It informs the ones doing the search; it does not replace them.

In clinical medicine, we do much the same thing, though we may not be conscious of it. We examine a patient and try to get a pattern of what is happening, using history, observation, examination, laboratory, and imaging findings. Sometimes the pattern we see is dense with data, sometimes it is more limited. But we get a sense of what the presenting situation is. And then we compare that pattern to similar ones that arise from our learning and our experience. We use that comparison to suggest what to do. We make an executive decision, a clinical judgment, based on that pattern-matching process.
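This pattern-matching process can be caricatured as a nearest-neighbor lookup: encode the presenting situation as a set of features and find the most similar prior case. The features, case data, and distance measure below are invented for illustration; real systems use learned representations, feature scaling, and far richer data.

```python
# Sketch of clinical pattern matching as nearest-neighbor lookup.
import math

# Historical cases: (feature vector, plan that worked).
# Features here are (systolic BP, HbA1c, age) -- illustrative only.
cases = [
    ((142, 8.2, 61), "lifestyle change plus metformin"),
    ((118, 5.4, 34), "routine follow-up"),
    ((165, 9.8, 70), "insulin, cardiology referral"),
]

def nearest_plan(patient, cases):
    """Return the plan from the most similar historical case."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    best = min(cases, key=lambda c: dist(patient, c[0]))
    return best[1]

print(nearest_plan((140, 8.0, 58), cases))
# -> "lifestyle change plus metformin"
```

A clinician does this implicitly against the cases in memory; the promise of a medical knowledge graph is that the comparison set becomes millions of cases instead of one career’s worth.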

The limitation, of course, is that the data set we use for comparison is limited to our own experience, and the perception of the situation at hand may be limited as well. This results in a range of recommendations made by different clinicians over very similar situations.

Aided by the Flow Health Medical Knowledge Graph, a clinician has a powerful tool. It can surface specific suggestions to both the clinician and the patient, ask questions, and gather additional information in order to better define the current condition. That more robust workup, that sharper definition of the problem, then guides pattern matching. But this time, the pattern is matched against a huge data store. The clinician has the equivalent of the world’s experience and learning with which to make highly individualized recommendations: ones that work the first time, without as much of the trial and error that is commonplace today.

We are at the beginning of this journey. The Medical Knowledge Graph is starting to be populated by very large data sets, like the one from the VA collaboration. The identification of patterns within medical data will take a leap forward unlike anything we have seen before. The actual tools used by clinicians and patients will evolve in order to use the Medical Knowledge Graph effectively. Just as the Google Knowledge Graph is a background tool, with searches coming from any of a variety of devices and browsers, the Medical Knowledge Graph can drive Electronic Health Record (EHR) systems used by clinicians, or consumer-facing apps designed for patients. The next generation of EHRs and other technologies will be powered by this technology. Doctors of the future will have access to the world’s medical information and will be able to make informed recommendations, based on an individualized understanding of a specific person’s situation. It’s not science fiction anymore.




