Geospatial Data and HIPAA

How have privacy regulations affected the use of GIS data?

Since 1854, when John Snow used geospatial mapping to locate the well spreading cholera in London, GIS data has been a cornerstone of public health and epidemiology research.  Today, a wealth of data sources is available for research.  For example, locate a patient within a census tract in the United States, and a variety of information, such as average income in the area, demographic data, and other census variables, can be linked directly to your patient-specific study data.  Alternatively, in this innovative study from Brazil, GIS mapping software was used to show that the distance an expectant mother had to travel through urban transportation networks to reach healthcare was an important risk factor for death during pregnancy.  Similar studies have used GIS data to examine infant mortality, HIV mortality in rural populations, and tuberculosis control measures.  While geocoding large amounts of data for medical epidemiology studies can be extremely informative, you need to be careful not to run afoul of government privacy laws, especially the HIPAA Privacy Rule in the United States.
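
To make the census-tract linkage concrete, here is a minimal Python sketch; the file names and columns (tract_fips, median_income, and so on) are hypothetical, and a real workflow would use your own study file and a tract-level extract from the Census Bureau.

```python
# Minimal sketch: linking patient-level study data to census-tract attributes.
# File names and column names are hypothetical.
import pandas as pd

# Patient-level study data, already geocoded to an 11-digit census tract FIPS code
patients = pd.read_csv("study_patients.csv", dtype={"tract_fips": str})

# Tract-level attributes such as median household income
tracts = pd.read_csv("census_tracts.csv", dtype={"tract_fips": str})

# Left join keeps every patient row and attaches the tract-level context
linked = patients.merge(tracts, on="tract_fips", how="left")
print(linked[["patient_id", "tract_fips", "median_income"]].head())
```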

The Health Insurance Portability and Accountability Act (HIPAA) rules define protected health information (PHI), which may include diagnoses, test results, and payment or visit information.  The intent was to protect people against disclosure of health information in conjunction with information that could reveal their identity.  This identifying information consists of 18 identifiers, such as name, social security number, and date of birth.  The definition of “identifiable information” also includes any data that would allow another person to re-identify an individual, directly or indirectly, without access to a specific code or key.  For geospatial information, the personal identifiers include a person’s street address and ZIP code.  GIS coordinates are considered an “equivalent geocode”, meaning that they are as good as a street address.  Imagine a map plotting the location of eight people infected with HIV in a sparsely populated rural area.  It would not take much to match that data up with a specific person.  The point is that all such information needs to be de-identified before it can be released or worked on outside of a HIPAA-compliant data storage and analysis environment.

De-identification of GIS data in healthcare research can be thought of as a two-part process:  de-identifying the data while obtaining the coordinates used to plot a person’s location (geocoding), and de-identifying the data when presenting the results of your research.

Geocoding is the process of translating an address into a set of XY coordinates that can be used to plot a location on a map.  You could do this easily by feeding a list of addresses into one of several geocoding services on the internet, such as bulkgeocoder, Google, Mapquest, cloudmade, or ArcGIS Online.  But if you have lists of patient data, this could be a massive HIPAA violation. The best way to make sure you are HIPAA compliant is to use a geocoding firm with which you have a business associate agreement (BAA), and which will take your information and generate the geocodes in a HIPAA-compliant, secure environment. An important best practice is to process a list of addresses that has been separated from any other information and can only be linked back by a secure, randomized key.  Once the geocoding service returns your data, you can link it back to your complete research file.  It is unclear, however, whether submitting a list of addresses using an e-mail address containing information about a diagnosis (e.g. Researcher@DiabetesInstituteResearch.Org) outside of a BAA would constitute a breach, since one might infer the diagnosis of people at the addresses on the list from the organization name.  Best to consult your organization’s privacy officer about this issue.
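
As a concrete illustration of that separate-and-re-link practice, here is a minimal Python sketch; the file names and columns are hypothetical, and the geocoding itself is assumed to happen inside the vendor’s HIPAA-compliant environment under a BAA.

```python
# Minimal sketch of the "separate, geocode, re-link" pattern described above.
import secrets
import pandas as pd

study = pd.read_csv("study_with_phi.csv")          # stays inside your secure environment

# 1. Assign a random, non-derivable key to each record
study["link_key"] = [secrets.token_hex(16) for _ in range(len(study))]

# 2. Send the vendor only the key and the address -- no names, diagnoses, or dates
study[["link_key", "address"]].to_csv("to_vendor.csv", index=False)

# 3. When the vendor returns link_key plus latitude/longitude, merge it back
geocoded = pd.read_csv("from_vendor.csv")           # columns: link_key, lat, lon
study = study.merge(geocoded, on="link_key", how="left")
```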

Once you have done your analysis and wish to publish plotted geocoded data, it must be done in a way that you cannot identify an individual by examining the data set alone or in combination with other publicly available data.  Think of the map of firearm owners in Westchester County published by a local newspaper.  If it had been a map of people with a diagnosis of leukemia, it would have been a HIPAA violation.  De-identification methods can be quite sophisticated, such as statistical de-identification.  An interesting workshop sponsored by the Department of Health and Human Services discussing these issues can be found here.  Several methods are available to avoid this pitfall:

  • Point aggregation – combining points into geographic bins, such as ZIP code areas, counties, or states.  This way, no individual data point is identifiable as a person, but the bins must have sufficient population and subject density.
  • Geostatistical analysis – One example is creating a probability map, where any area represents the probability of a study subject having a particular condition or value.  Again, no individual points are plotted.
  • “Jittering” – adding or subtracting a small random offset from a precise GIS location so that an individual point is not plotted at its exact position (a minimal sketch of aggregation and jittering follows this list).
  • Data point displacement by translation, rotation, or change of scale.
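
Here is a minimal Python sketch of two of these approaches, point aggregation and jittering.  The input columns, the small-cell suppression threshold, and the jitter radius are illustrative choices, not prescribed values.

```python
# Minimal sketch: point aggregation into geographic bins, plus jittering.
import numpy as np
import pandas as pd

points = pd.read_csv("geocoded_subjects.csv")        # hypothetical columns: lat, lon, zip

# Point aggregation: report counts per ZIP code instead of individual locations,
# and suppress bins too small to protect against re-identification
counts = points.groupby("zip").size().rename("n_subjects").reset_index()
counts = counts[counts["n_subjects"] >= 11]           # illustrative small-cell threshold

# Jittering: add a small random offset (here roughly +/- 500 m) to each point
rng = np.random.default_rng(42)
jitter_deg = 0.005                                     # ~500 m in latitude
points["lat_jit"] = points["lat"] + rng.uniform(-jitter_deg, jitter_deg, len(points))
points["lon_jit"] = points["lon"] + rng.uniform(-jitter_deg, jitter_deg, len(points))
```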

Resolution of the map is also important, as is the population density of the area you are plotting data for.  One needs to be careful, as well, that the de-identification methods do not change the validity of your research results.

So, the use of large GIS data sets is a tremendous opportunity for population health research, but requires specific practices with respect to de-identification when analyzing and publishing that data.  Geocode and aggregate carefully!

Norovirus, Networks, and Big-Data

Another norovirus outbreak has been in the news, related to a group of cases on a cruise ship.  With over 700 passengers and crew falling ill, it is one of the largest outbreaks on a cruise ship ever reported.  Norovirus is a highly contagious member of the Caliciviridae family, and contains multiple genotypes and subtypes.  Small mutations in the norovirus genome lead to new strains, similar to the phenomenon of antigenic drift in influenza viruses.  Larger mutations can lead to pandemic strains when the prevailing population immunity to older strains is no longer effective against the new strain.  The United States is in the midst of the norovirus season, with a new strain responsible for most cases.

How is Big Data Science revolutionizing the tracking and prediction of norovirus outbreaks?  The US Centers for Disease Control and Prevention now tracks norovirus outbreaks through traditional outbreak surveillance, as reported by public health departments around the US and confirmed by molecular testing of specimens from symptomatic individuals. But an alternative Big Data approach, real-time social media monitoring, is being tested in the UK by the Food Standards Agency.  Tweet the hashtag #Barf in London, and your tweet will be added to the FSA statistics, along with its geographic location.  About 50% of gastrointestinal illnesses in the US and UK are caused by norovirus, so tweets and Google searches about stomach cramps, vomiting, and diarrhea have a high likelihood of being norovirus related!   FSA researchers found that an upswing in hashtags describing GI symptoms occurred 3-4 weeks before an outbreak was identified by traditional laboratory surveillance.
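
As a rough illustration of how such a hashtag-based signal could be computed, here is a minimal Python sketch.  The post records and keyword list are hypothetical; a real system would pull from a social media API and apply far more careful text classification.

```python
# Minimal sketch of keyword-based syndromic surveillance from geotagged posts.
from collections import Counter

GI_TERMS = {"#barf", "#vomit", "vomiting", "diarrhea", "stomach cramps"}

def daily_signal(posts):
    """Count symptom-related posts per (date, region)."""
    signal = Counter()
    for post in posts:                      # each post: {"date", "region", "text"}
        text = post["text"].lower()
        if any(term in text for term in GI_TERMS):
            signal[(post["date"], post["region"])] += 1
    return signal

example = [
    {"date": "2014-02-01", "region": "London", "text": "Awful night... #barf"},
    {"date": "2014-02-01", "region": "London", "text": "Great curry, no regrets"},
]
print(daily_signal(example))                # Counter({('2014-02-01', 'London'): 1})
```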

#Vomit:  Predicting Norovirus Outbreaks with Twitter

So how can Big Data Science contribute to solutions?  Recognizing outbreaks in real time using Big Data analytics is a start; taming data velocity and volume are key here.  Early recognition can lead to containment, and public health strategies can limit the outbreak.  But potential solutions go beyond larger public health responses.  One of the major ways individuals can prevent the spread of the virus, and protect themselves from infection, is simple good hygiene such as hand washing.  Norovirus outbreaks occur more frequently in places where people are living together and have risk factors such as being elderly, immunosuppressed, or very young; day care centers, nursing homes, and hospitals are the key areas.  In a novel application of Big Data Science real-time analytics, IBM has developed a method of tracking handwashing among healthcare workers after each patient contact.  An RFID tag carried by the worker, coupled with sensors that record entry into the room, exit, and use of a hand sanitizer dispenser, has led to pronounced increases in hand washing.  The jury is still out on whether this will reduce infectious outbreaks or their spread, but if the promise bears out, look for such systems in high-risk areas such as institutional kitchens, day care centers, and other settings.  It does seem a bit Big Brother-ish, which is a topic for my next post…
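
To give a sense of the kind of per-visit logic such a system needs, here is a minimal Python sketch that computes a hand-hygiene compliance rate from a stream of entry, sanitize, and exit events.  The event format is an assumption for illustration, not IBM’s actual design.

```python
# Minimal sketch: hand-hygiene compliance from RFID/sensor events.
def compliance_rate(events):
    """events: time-ordered dicts with 'worker' and 'type' in {'enter','sanitize','exit'}."""
    visits, compliant = 0, 0
    sanitized = {}                              # worker -> used dispenser during current visit
    for e in events:
        if e["type"] == "enter":
            sanitized[e["worker"]] = False
        elif e["type"] == "sanitize":
            sanitized[e["worker"]] = True
        elif e["type"] == "exit":
            visits += 1
            compliant += sanitized.pop(e["worker"], False)
    return compliant / visits if visits else 0.0

events = [
    {"worker": "rn01", "type": "enter"}, {"worker": "rn01", "type": "sanitize"},
    {"worker": "rn01", "type": "exit"},  {"worker": "rn02", "type": "enter"},
    {"worker": "rn02", "type": "exit"},
]
print(compliance_rate(events))                  # 0.5
```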

For now….wash your hands, tweet your symptoms, and stay healthy!

Big Data and the Flu

Flu is one area in medical science where Big Data has made inroads.  Every year, the seasonal influenza strains make their way across the world, infecting tens of millions of people and causing serious illness and even death. Every decade or so, a new strain of flu emerges which differs radically from the prior strains we have been exposed to.  Because we lack immunologic memory to these pandemic strains, they are able to cause much more serious illness in large segments of the world population.  Thus, tracking the emergence of new influenza viral strains, tracking population-level infections, and studying immune responses to both infection and vaccination have generated very large amounts of data, much of which is now publicly available.

Basic data is collected each year on the genetic make-up of circulating flu viruses to select the strains which will be included in the next year’s influenza vaccine.  This involves collecting flu virus specimens from all over the world, genetically sequencing them, clustering viruses by sequence similarity, and picking the emerging strains that differ enough to need a new vaccine.  The process culminates in February, when new vaccine strains are chosen by the World Health Organization and the Centers for Disease Control and Prevention. All of this activity has led to an explosion in the number of data sets and types available for public use.

Data sets for influenza research span the gamut of informatics.  At the basic science level, the Influenza Virus Resource contains an extensive database of influenza virus protein sequences.  The Influenza Research Database contains sequences and more, including immune epitope data (which sequence segments protective antibodies or cells recognize on the influenza virus).  These data sets allow scientists to determine how related viruses are to each other by sequence comparison.  A novel dimensional reduction method termed “antigenic cartography”, which makes use of multidimensional scaling (MDS), can be found here.  Multidimensional scaling reduces the complex relationships between influenza virus sequences and vaccine immunity to a distance or dissimilarity measure that can be plotted in two dimensions.  This visualization method allows researchers to show how related different strains of flu are in the immune response they generate.  Other groups, such as my laboratory, have performed detailed time-series experiments and collected data on individual immune responses, including measurements of 24,000 different genes each day for 11 days after vaccination in multiple subjects.  The raw RNAseq gene expression data for this experiment takes up approximately 1.7 terabytes.
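
To illustrate the dimensional reduction idea behind antigenic cartography, here is a minimal Python sketch that embeds a matrix of pairwise antigenic distances in two dimensions with scikit-learn’s MDS.  The strain names and distance values are invented for illustration only.

```python
# Minimal sketch: 2-D embedding of a pairwise antigenic distance matrix with MDS.
import numpy as np
from sklearn.manifold import MDS

strains = ["A/2004", "A/2007", "A/2009", "A/2012"]
# Symmetric pairwise antigenic "distances" (hypothetical units)
D = np.array([
    [0.0, 1.5, 3.0, 4.5],
    [1.5, 0.0, 1.8, 3.2],
    [3.0, 1.8, 0.0, 1.6],
    [4.5, 3.2, 1.6, 0.0],
])

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(D)                    # one (x, y) point per strain
for name, (x, y) in zip(strains, coords):
    print(f"{name}: ({x:.2f}, {y:.2f})")
```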

Google Flu Trends predictions of influenza activity by web search term aggregation

At the other end of the spectrum are near-real-time data on influenza cases, tracked across the United States by the Centers for Disease Control and Prevention and across Europe and the rest of the world by the World Health Organization. A particularly relevant Big Data app is the Google Flu Trends site, which uses big data aggregation methods to tally Google searches for influenza-related terms by geographic location.  Google search activity increases during seasonal influenza outbreaks, and parallels data from the CDC on confirmed cases of influenza or influenza-like illness.  It is a great example of the “Four V’s” of Big Data in use:  Volume, Velocity, Variety and Veracity. One of my colleagues, Henry Kautz at the University of Rochester, and his graduate student Adam Sadilik (now @Google) have taken this a step further, estimating your odds of getting influenza or another illness by real-time analysis of GIS-linked Twitter feeds!  A demonstration can be found on their web site, GermTracker.  They use microblog GIS information coupled with AI methods that analyze keyword content linked with illness to determine the likelihood that you are sick, or that you have come into contact with somebody who is sick.  They then predict your odds of coming down with the flu in the next 8 days based on your location. What has been done for flu with public big data can be done for other infectious diseases at multiple levels, and we will likely see an increasing trend towards open-source data, crowd-sourced predictive analytics, and real-time big data analysis.
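
As a toy illustration of the Google Flu Trends idea, here is a minimal Python sketch that correlates a weekly flu-related search volume series with a reference surveillance series such as the CDC’s percentage of visits for influenza-like illness (ILI).  All of the numbers below are made up.

```python
# Minimal sketch: comparing a search-volume signal with a reference ILI series.
import numpy as np

search_volume   = np.array([120, 150, 210, 340, 560, 700, 650, 480, 300, 190])  # queries/week
cdc_ili_percent = np.array([1.1, 1.3, 1.9, 2.8, 4.5, 5.6, 5.2, 3.9, 2.5, 1.6])  # % of visits

r = np.corrcoef(search_volume, cdc_ili_percent)[0, 1]
print(f"Pearson correlation between search volume and ILI rate: {r:.2f}")
```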

How Much Unstructured Big Medical Data Is There In The EMR?

How much unstructured big data is there in the EMR? Unstructured data is data that doesn’t fit into neat columns on a spreadsheet, or fields and look-up tables in a database, like the narrative text of a history of present illness (HPI). It used to be that we sat down with a pen and the paper chart, and wrote our progress notes in the office and in the clinic. Or we dictated the notes, which were transcribed. But with the advent of the EMR, templates have crept in, as well as the widespread and controversial practice of copying and pasting text from a previous encounter (see the recent NYT article).

This is interesting in a quirky way. As physicians, nurse practitioners, and other providers have become reluctant data entry clerks, they use many shortcuts so that they will have time to take care of the patients, including templates with stylized or constrained vocabularies, self-generated “smart phrases”, and patient-specific narratives that can be recalled and modified.  The remainder of the note is populated with structured data already in the system (labs, test results, x-ray results).  Because medical changes are often not so dramatic from one day to the next, the actual novel unstructured information content from one note to the next may be only a tiny fraction of the total bytes, and the difference between the current and previous note probably carries as much information as the note’s full content.  But when people get hurried or sloppy, old information that is no longer current gets carried along without being changed in the notes.  So the key information extraction task is identifying the true changes, separating them from the relatively static or outdated data that is carried along, and extracting the novel information.
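
One simple way to get at those true changes is to diff successive notes.  Here is a minimal Python sketch using difflib on two invented notes; real notes would need proper handling of templates and structured inserts.

```python
# Minimal sketch: estimating the fraction of novel content in a note by diffing
# it against the previous note for the same patient.
import difflib

previous_note = "Day 3. Afebrile. Lungs clear. Continue IV antibiotics. Diet advanced."
current_note  = "Day 4. Afebrile. Lungs clear. Continue IV antibiotics. Mild nausea overnight."

matcher = difflib.SequenceMatcher(None, previous_note.split(), current_note.split())
shared = sum(block.size for block in matcher.get_matching_blocks())
novel_fraction = 1 - shared / len(current_note.split())
print(f"Approximate fraction of novel words in today's note: {novel_fraction:.0%}")
```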

How is this relevant to big data analytics in medicine?  If much of the content is captured by a stylized vocabulary and filled with structured data already present in data tables, how much independent information will there be in a medical note?  And if the data has dependencies because of this stylized nature and these controlled vocabularies, how does that impact data mining and statistical analytics?  I am not sure whether this type of problem has a formal technical term in machine learning, but if not, it is likely to get one soon!