A recent report published in the Journal of the American Medical Association (JAMA) sheds light on the complexities surrounding the integration of artificial intelligence (AI) in healthcare. The report was written by a panel of nationally recognized experts led by Professor Derek Angus of the University of Pittsburgh, Professor Glenn Cohen of Harvard Law School, and Professor Michelle Mello of Stanford Law School. The authors discuss the new complications that AI tools raise for determining accountability when something goes wrong in medical care.
The report’s release is timely, as healthcare organizations are on the cusp of widespread adoption of AI technology. The authors argue that this moment calls for more rigorous assessment of AI performance, and that the nation’s experience so far underscores the need for investment in digital infrastructure. Professor Angus points out that without more funding there is simply no way to test whether these AI tools are effective at all. He states, “For clinicians, effectiveness usually means improved health outcomes, but there’s no guarantee that the regulatory authority will require proof [of that].”
As AI tools and applications gain traction in healthcare, patients may find it difficult to establish liability in cases of suspected medical malpractice involving these technologies. As Professor Cohen points out, the relationships among the parties involved can make legal action very tricky. He notes, “They may point to one another as the party at fault, and they may have existing agreement contractually reallocating liability or have indemnification lawsuits.”
Additionally, the report underscores that the current landscape of AI in healthcare is rife with ambiguity. Professor Mello argues that this uncertainty raises costs across the entire AI innovation and adoption ecosystem. She states, “The problem is that it takes time and will involve inconsistencies in the early days, and this uncertainty elevates costs for everyone.”
Professor Angus’s main point addresses an important problem: the most rigorously evaluated AI tools have not proven effective, and those tools are typically the most underused in clinical practice. He observes, “There’s definitely going to be instances where there’s the perception that something went wrong and people will look around to blame someone.”
This detailed analysis shines a light on the technological impact of AI tools in healthcare while also addressing the legal and economic challenges of their deployment. The authors are optimistic that courts will be able to handle these legal questions appropriately, and they stress the importance of establishing clear accountability guidelines as AI plays an ever greater role in medicine.
Beyond offering a wealth of information, the report urges stakeholders to consider these challenges carefully and to advocate for regulatory structures that address harms associated with the use of AI. The aim is to ensure that patients receive safe and effective care while allowing healthcare providers to innovate responsibly.