October 24, 2018

Rise of AI in Healthcare Raises Important Questions About Safety, Liability and Privacy

By Megan Diamond, Program Manager, HGHI

Caller: ‘Hi, I would like to reserve a table for Wednesday, the 7th’

Hostess: ‘For 7 people?’

Caller: ‘Um, it’s for 4 people’

Hostess: ‘4 people… when? Today? Tonight?’

Caller: ‘Wednesday, at 6pm’

After playing a one-minute audio clip of a restaurant reservation call during a recent seminar at the Harvard Global Health Institute, Sara Gerke looked at her audience. “Which one do you think is the robot?” she asked the 50+ Harvard and MIT affiliates assembled in the room. “The caller or the hostess?”

Gerke, a research fellow for precision medicine at the Petrie-Flom Center for Health Law Policy, Biotechnology and Bioethics, and Glenn Cohen, the Center’s faculty director and a professor at Harvard Law School, had come to HGHI to kick off our new seminar series, which examines how artificial intelligence is revolutionizing health care, with a special focus on challenges and opportunities for low- and middle-income countries. (Click here for more on this new HGHI program area.)

The question about the reservation call was a set-up, of course, but it was still hard to believe that the AI-powered bot was the caller: in this exchange, it showed more situational awareness and comprehension than the human hostess.

Last May, Google CEO Sundar Pichai made headlines when he played this call at Google’s annual I/O conference in California (a gathering where developers share their latest innovations), showcasing how an AI-powered bot – your future automated personal assistant – can now make “eerily lifelike phone calls for you” (The Guardian).

In her talk, Gerke used the call to illustrate the gap between what we think artificial intelligence can and can’t do, and what artificial intelligence is actually capable of.

This is especially true in health care delivery: Today, companies leverage AI to analyze medical images and diagnose cardiac conditions with the same accuracy as expert clinicians. An app uses AI to help people self-diagnose and advises them on next steps, including whether to see a specialist or go to the emergency room. And visually impaired people can now identify products in a store and recognize faces thanks to AI-assisted eyewear.

The juxtaposition of AI’s pervasiveness with our naïveté about when it is or isn’t being used poses enormous ethical and legal challenges that affect people globally. In their talks, Gerke and Cohen highlighted three of the main challenges surrounding the use of AI in healthcare: safety, liability and data privacy.

For example, who is responsible when a doctor follows the treatment guidelines offered by an AI-assisted technology and a patient has a poor outcome? Currently, the answer is unclear.

“In a world where doctors are using a tool that is programmed in a way that even they don’t understand, there is an urge to consider adopting a new view of liability,” said Glenn Cohen.

Cohen suggested looking to other liability schemes in the healthcare space as a starting point, such as the National Vaccine Injury Compensation Program. This pool of money, funded by a small tax on every vaccine, compensates patients harmed by the rare but known side effects of vaccines without assigning blame. As a result, companies are not disincentivized from developing vaccines, and the public continues to benefit from this essential public health intervention.

Could this be applied to AI-assisted technologies? The answer, for now, is maybe: The frequency and types of adverse events from vaccines are well known and documented, making it clear which complications give rise to a claim. AI algorithms, by contrast, are constantly evolving, which means the complications arising from their errors are, too. This critical difference makes it challenging to emulate the approach that has worked so well for vaccines. What is clear, though, is that striking a balance between pursuing the immense potential of AI-assisted technologies and holding someone (or something) responsible is critical.

“From pacemakers to in-vitro diagnostics, we need to assure an adequate level of oversight to make sure AI applications are effective and safe,” said Sara Gerke.

Gerke and Cohen also highlighted data privacy as a major concern. In the U.S., current data privacy regulations only cover traditional types of patient data, such as notes in an electronic medical record. Informal sources of data such as Google searches, online purchases, and social media posts can provide a surprising amount of insight into someone’s health – already, Instagram filters are as good at predicting depression as regular screening – but these types of data are not covered by the same legal protections. In this new context, the confidentiality we are afforded when disclosing personal health information in the doctor’s office starts to lose its relevance: our movements, ‘likes’, and shopping habits become the gateway not only to our health but also to our secrets, and it is unclear if and how we can opt out of this data being collected.


Join us for our next seminar in the Tech & Health series, “Mapping and Repairing the Brain: Implications for Global Health,” with Ed Boyden of MIT’s Synthetic Neurobiology Group on Wednesday, November 14, from 2-3 p.m.