Revealing Healthcare Networks Using Insurance Claims Data


As I noted in my post last week, every healthcare accountable care organization in the United States is trying to understand provider networks. Common questions include:

  • What is the “leakage” from our network?
  • What medical practices should we acquire?
  • What are the referral patterns of providers within the network?
  • Does the path that a patient takes through our network of care affect outcomes?
  • Where should we build the next outpatient clinic?

Much of this analysis is done using insurance claims data, and this post is about how such data is turned into a provider network analysis.  Here, I’ll discuss how billing or referral data becomes a graph of provider networks.  Most of us are now familiar with social networks, which describe how a group of people are “connected”.  A common example is Facebook, where apps like TouchGraph show who you are friends with, whether your friends are friends with each other, and so on.  These networks are built on a simple concept: that of a relationship.

To describe a physician network, we first make a table from claims data that shows which physicians (D) billed for visits or procedures on which patients (P).  This is shown in the figure below.  Next, we tally which physicians billed for seeing the same patients, and how many times, giving a common billing matrix.  The billing does not have to happen at the same visit or for the same problem, just over the course of the measurement period.  Notice that the matrix is symmetrical, with the diagonal giving the total number of patient encounters for each doctor.  This type of matrix is referred to as a similarity matrix (the inverted notion is a distance matrix).
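The tally above can be sketched in a few lines of Python.  The claims records and provider names below are made up for illustration, and the off-diagonal cells count distinct shared patients, which is one common convention:

```python
from collections import defaultdict

# Hypothetical toy claims: (provider, patient) pairs pulled from billing lines.
claims = [
    ("D1", "P1"), ("D1", "P2"), ("D1", "P2"),
    ("D2", "P1"), ("D2", "P3"),
    ("D3", "P3"),
]

patients_by_provider = defaultdict(set)  # who billed for whom
encounters = defaultdict(int)            # total claim lines per provider
for provider, patient in claims:
    patients_by_provider[provider].add(patient)
    encounters[provider] += 1

# Symmetric common-billing matrix: off-diagonal cells count distinct shared
# patients; diagonal cells hold each provider's total encounters.
providers = sorted(patients_by_provider)
matrix = {
    (a, b): (encounters[a] if a == b
             else len(patients_by_provider[a] & patients_by_provider[b]))
    for a in providers for b in providers
}

print(matrix[("D1", "D2")], matrix[("D1", "D1")])  # → 1 3
```

The symmetry falls out for free, since set intersection doesn’t care about the order of the two providers.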


The provider network graph plotted from the above example shows the network relationships between four doctors.  The size of each circle shows the total number of patients billed for by that doctor, and the width of each line shows the strength of the shared-patient connection.
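One way to produce such a plot (not necessarily how the figure above was made) is with the networkx library.  The patient totals and shared-patient counts here are invented for the sketch:

```python
import networkx as nx

# Hypothetical totals and shared-patient counts for four doctors.
total_patients = {"D1": 40, "D2": 35, "D3": 12, "D4": 8}
shared = {("D1", "D2"): 20, ("D1", "D3"): 3, ("D2", "D4"): 2}

G = nx.Graph()
for doc, total in total_patients.items():
    G.add_node(doc, size=total)        # circle size ∝ patients billed
for (a, b), count in shared.items():
    G.add_edge(a, b, weight=count)     # line width ∝ shared patients

# Drawing (requires matplotlib):
# nx.draw(G,
#         node_size=[G.nodes[n]["size"] * 20 for n in G],
#         width=[G[u][v]["weight"] / 4 for u, v in G.edges()],
#         with_labels=True)

print(G["D1"]["D2"]["weight"])  # → 20
```

Storing the totals as node attributes and the shared counts as edge weights keeps the graph itself as the single source of truth for both visual encodings.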


Now, if we have this data for a large network, we can look at a number of measures using standard methods.  In the above example, we can see that the two orange providers are probably members of a group practice, sharing many of the same patients and referring to many of the same providers.  See this humorous post by Kieran Healy identifying Paul Revere as the ringleader of the American Revolution using a similar analysis!  Providers in red are “out-of-network”, each with connections to a single in-network physician.  However, the graph itself does not reveal the reason these out-of-network providers share patients with the in-network provider.  It could be that the out-of-network group offers a service not available within the network, such as gastric bypass, pediatric hepatology, or kidney transplantation.
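Flagging those single-link out-of-network providers is a straightforward query once you have the edge list.  The edges, roster, and provider names below are hypothetical:

```python
from collections import defaultdict

# Hypothetical weighted edges (shared-patient counts) and network roster.
edges = [("D1", "D2", 20), ("D1", "X1", 4), ("D1", "X2", 3), ("D2", "D3", 6)]
in_network = {"D1", "D2", "D3"}

# For each out-of-network provider, collect its in-network contacts.
contacts = defaultdict(set)
for a, b, _ in edges:
    if a in in_network and b not in in_network:
        contacts[b].add(a)
    elif b in in_network and a not in in_network:
        contacts[a].add(b)

# Out-of-network providers whose shared patients all flow through one
# in-network physician — candidates for a closer look at why.
single_link = {p for p, docs in contacts.items() if len(docs) == 1}
print(sorted(single_link))  # → ['X1', 'X2']
```

The query tells you *who* to look at, but as noted above, not *why* — that takes clinical context the graph doesn’t carry.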

It is not difficult to see that you could create network representations using many types of data.  Referral data would allow you to add directionality to the network graph.  You could also look at total charges for shared patients, as opposed to visits or procedures, to get a sense of the financial connectedness of providers or practices.  Linking by lab tests or procedures can reveal common practice patterns.  Many other variations are possible.  The complexity of the network grows with the number of providers and patients in your claims data.
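Both variations — directionality and charge-weighting — amount to keeping ordered pairs and summing a different quantity per edge.  A sketch, with made-up referral records:

```python
from collections import Counter

# Hypothetical referral records: (referring provider, receiving provider, charge).
referrals = [
    ("D1", "D2", 150.0), ("D1", "D2", 200.0), ("D2", "D1", 90.0),
    ("D1", "D3", 400.0),
]

# Directed edges: (src, dst) and (dst, src) are kept as separate keys,
# which is exactly what adds directionality to the graph.
counts = Counter((src, dst) for src, dst, _ in referrals)

# Charge-weighted version of the same directed edges.
charges = Counter()
for src, dst, amount in referrals:
    charges[(src, dst)] += amount

print(counts[("D1", "D2")], charges[("D1", "D2")])  # → 2 350.0
```

Note that D1→D2 and D2→D1 accumulate independently, so an asymmetric referral relationship is visible in the data rather than averaged away.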

These simple graphs are just the beginning.  Couple the network graph with geospatial locations of providers, and you add another layer of complexity.  Add city bus routes, and you can see how patients might get to your next office location.  Add census data, and you can look at the relationship between medical practice density, referral patterns, and the average income within a zip code area.  The possibilities are incredible!

So why is this big data?  To build a large and accurate network, you need to analyze millions of insurance claims, lab tests, or other connection records.  Analyzing data of this size requires large amounts of computer memory and often cluster computers running distributed computing software such as Hadoop (more on this in a future post).  We owe a very large debt to the “Healthcare Hacker” Fred Trotter, who created DocGraph, the first open-source network graph of this kind, built from 2011 Medicare claims data for the entire United States.  The dataset can be downloaded from NotOnly Dev for $1 here.  This graph has 49 million connections between almost a million providers.  Ryan Weald created a beautiful visualization of the entire DocGraph dataset, which I will leave you with here.
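To see where the size comes from, consider the pair-counting step in a map-reduce style: a patient seen by n providers emits C(n, 2) provider pairs, so pair volume grows quadratically with how widely patients are shared.  A toy sketch (input data invented):

```python
from collections import defaultdict
from itertools import combinations
from math import comb

# Hypothetical toy input for the "map" step: patient -> billing providers.
providers_by_patient = {
    "P1": {"D1", "D2", "D3"},
    "P2": {"D1", "D2"},
}

pair_counts = defaultdict(int)
for docs in providers_by_patient.values():
    for a, b in combinations(sorted(docs), 2):  # map: emit each pair once
        pair_counts[(a, b)] += 1                # reduce: sum per pair

# A patient seen by 3 providers emits C(3, 2) = 3 pairs; at national scale
# this intermediate pair stream is what pushes the job onto a cluster.
print(pair_counts[("D1", "D2")], comb(3, 2))  # → 2 3
```

In an actual Hadoop job the map and reduce phases run on separate machines, but the per-record logic is the same as these two loop bodies.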


Primary Care Genomics: The Next Clinical Wave?

Is the main barrier in healthcare analyzing and connecting the massive amounts of data present in electronic medical records, or is it generating the right data at the right level?  To really move healthcare forward, argue Michael Groner, VP of engineering and chief architect, and Trevor Heritage, we need to move research-level testing (whole exome sequencing, genomics, clinical proteomics) outside of the research environment and make it widely available to primary care physicians.  According to Groner, only when we amass large collections of such data will the true value of big data analytics methods be realized in medicine.

“It’s untenable to expect every physician or health care provider interested in improving patient care through the use of genomics testing to make the costly capital and other investments required to make this science a practical reality that impacts day-to-day patient care. Instead, the aim should be to connect the siloed capabilities associated with genomics testing into a simple, physician-friendly workflow that makes the best services accessible to every provider, regardless of geography or institutional size or affiliation…The true barrier to clinical adoption of genomic medicine isn’t data volume or scale, but how to empower physicians from a logistical and clinical genomics knowledge standpoint, while proving the fundamental efficacy of genomics medicine in terms of improved patient diagnosis, treatment regimens, outcomes and improved patient management.”

It’s a great dream, and parts of it will be realized in the future, but it ignores many of the realities of in-the-trenches medical practice and medical science.  Genomic medicine will simply not improve diagnostic acumen for many clinical problems; it’s just the wrong method.  Some examples include fractures, appendicitis, stroke, heart attacks, and many others.  Sequencing my genome will not diagnose my diverticulitis.  This has nothing to do with making genomic science and whole-genome analytics a practical reality, but rather with matching the tools to the appropriate medical problem and scale.  Genomics is quite good at providing information about genetic risk of conditions, but not necessarily at diagnosing them.  Knowing that somebody has a BRCA1 breast cancer gene mutation does not tell you whether they actually have breast cancer, and if they do, which breast it’s in, whether it has metastasized, and where.

Groner’s larger point about the need to use data science to make personalized medicine a real-time reality, however, is well taken.  For example, the new guidelines for treating cholesterol abnormalities with statins, powerful cholesterol-lowering drugs, are based on a risk score that no provider can calculate in their head.  Personalized medicine could evolve to generate a personalized risk assessment, based on a risk score for cardiovascular disease.  Beyond this, one could imagine the risk score being modified by a proteomics analysis of subtle serum proteins and their associated contributions to cardiovascular risk, and by a genomic analysis of hereditary risk.  Integrating this evidence and providing clinicians with some measure of how to weight the predicted risk factors when making treatment decisions are true growth areas for medical genomics and health informatics.