12/18/2024

AI in Healthcare: Development Phases, Ethical Questions, and Liability Risks

 

Q&A with I. Glenn Cohen, JD, Deputy Dean and James A. Attwood and Leslie Williams Professor of Law, Harvard Law School; Faculty Director, Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics

Provided by OSMA’s exclusively endorsed medical liability insurance partner, The Doctors Company (TDC).

Q: You recently spoke at the TDC Group Executive Advisory Board meeting about legal and ethical issues relating to medical artificial intelligence (AI). Can you summarize your main themes?

A: While there are a lot of AI-related conversations going on about the risks of medical professional liability exposure for clinicians and organizations, the existing case law on the subject is thin, and the liability risk may be smaller than is commonly supposed. At the same time, there are many ethical quandaries as well as sources of reputational risk to consider.

Overall, I am on the more optimistic side about medical AI. I think AI has real benefits for hospital systems, physicians, other practitioners, and ultimately patients. At the same time, there are serious concerns about privacy, bias, consent, and accountability, as I discuss in my work. And while it would be nice to say that keeping a human in the loop will always help, the literature suggests that in many instances, adding human decision-makers produces worse performance than either humans or AI acting alone.

Q: One thing you emphasized was that there are various goals for adopting medical AI, but they may have very different ethical valences. Can you say more?

A: I think the first question to ask is, “Why are we building this?” What are we hoping medical AI might do for us that is worth the risk or cost?

Here are some possible answers, initially suggested by my friend and sometime coauthor W. Nicholson Price:

Read the Full TDC Article Here >

 

