Who is responsible when AI makes a medical mistake?

Between patients, you may toggle between charts and an AI assistant, prompting it to summarize a complex medical history, draft a prior authorization appeal and flag potential drug interactions. Minutes later, it is suggesting differential diagnoses, a task that once required flipping through reference materials or relying solely on memory. The efficiency is undeniable, but questions remain: what should be independently verified, what belongs in the medical record and, ultimately, who is accountable if something goes wrong?

As artificial intelligence becomes more embedded in clinical workflows, physicians are navigating a shifting legal landscape where the line between assistance and liability is still being defined. “It remains to be seen how the use of artificial intelligence impacts medical malpractice liability,” Lauren DeMoss, healthcare attorney and co-chair of Maynard Nexsen’s health care and life sciences practice, shares with Sermo. “There is not a lot of case law, but we can certainly expect to see more in the coming years.”

Physicians on Sermo are discussing what they consider to be responsible use of AI tools. Some find them generally reliable. “I have utilized OpenEvidence (a collaboration between JAMA and NEJM) and even occasionally referred to it, especially in the setting where patients have been skeptical of recommendations,” one physician writes. “It gives links to the supporting data for its answers and I have found it to be free of hallucinations (so far).”

Other Sermo members point out limitations. “It’s incomprehensible to be carried away by something that isn’t proven, especially in our field,” one doctor writes. “However, I acknowledge using AI to gather specialized information in my books; I wouldn’t conceive of the tedious method of searching for citations, reprints, and summaries on a specific topic without such a procedure.” Another Sermo member believes that AI is “useful for consulting but it cannot turn into the main decision-making tool.”

As AI becomes more widespread in medicine, it raises the question of whether physicians can be sued if they follow, or ignore, a diagnosis suggested by an algorithm. Explore the evolving landscape of medical AI liability and how AI may influence the standard of care.

Disclaimer: This article reflects real conversations taking place within the Sermo physician community and is published for educational purposes only. It does not constitute legal or medical advice. The information provided is general in nature; laws governing medical malpractice, standard of care, and liability vary significantly by jurisdiction. Physicians should contact a qualified legal representative for advice specific to their circumstances. Quotes from community members have been anonymized.

Are doctors liable for AI errors?

When software fails, developers may face product liability or negligence claims, depending on how the technology is classified and the jurisdiction. Product liability claims treat the software as a defective product. Consider the Raine v. OpenAI lawsuit. The case centers on allegations that ChatGPT generated harmful content that contributed to user Adam Raine’s death, raising questions about whether the developer can be held strictly liable for the outputs of its technology. Since no physician was involved, the legal focus is on the product itself—whether it was defectively designed or insufficiently safeguarded. 

Plaintiffs in cases like this attempt to treat AI tools similarly to other consumer products, arguing that companies should bear responsibility for foreseeable harm caused by their systems. “A human therapist would have recognized that Adam’s escalating presentations could have been him wanting to be given reasons to not die, but the AI, by its very design, could not tell the difference between his persistence and an actual demand for encouragement,” a physician on Sermo asserts regarding the case.

Physician scenarios, however, are evaluated through a different lens. If a doctor uses an algorithm in patient care, the legal focus is likely to remain squarely on the doctor’s clinical judgment. As of 2026, case law on the use of AI in healthcare is thin, but in existing EHR-related malpractice cases, courts have tended to focus primarily on how the physician interpreted and acted on the software’s output. The same reasoning, and the same underlying legal principle that the physician-patient duty of care is non-delegable, will likely apply to AI. Courts may consider the following:

  • Did the clinician critically evaluate the recommendation? 
  • Was it reasonable to rely on it in that context? 
  • Would a similarly trained physician have made the same decision?

This distinction creates a growing gray area as AI use becomes more widespread. AI systems can generate erroneous outputs that may still appear authoritative. If a physician follows an AI-generated recommendation that leads to harm, they could face liability for over-reliance. On the other hand, if they ignore an AI-generated warning that later proves accurate, that decision could also be scrutinized as a missed opportunity to meet the evolving standard of care.

As cases like Raine v. OpenAI move through the courts, they may help clarify where responsibility falls for AI developers. AI may inform decisions, but it does not absorb liability, according to one Sermo member with medicolegal experience. “A doctor’s role as the final clinical decision-maker cannot be ceded… AI is neither ethically nor legally a replacement for expertise and judgment.” 

How AI ‘hallucination’ can impact your decision-making as a physician

Generative AI doesn’t have a human’s reasoning abilities, a Sermo member with medicolegal experience points out: “AI operates through the triggering of successive algorithms… this is also only a simulacrum of thought.”

Those algorithms are infamous for confidently supplying false outputs, a phenomenon referred to in medicine as AI hallucination. Sometimes, the software will even fabricate study results or invent citations that do not exist.

Because of this, clinicians are generally expected to verify algorithmic conclusions in practice. A medical-grade platform like OpenEvidence, which links directly to supporting clinical data, is generally a safer choice than a consumer-grade tool like ChatGPT for complex clinical queries. “ChatGPT is notorious for being your neediest wannabe friend, telling you how great your ideas are and even couching statements that you are wrong in affirmations of how brilliant your question was,” cautions a physician on Sermo.

How to responsibly integrate AI in medical care

“Existing cases still rely on ‘reasonable physician under similar circumstances’ standard, regardless of whether AI was used,” DeMoss notes. This standard still prioritizes human judgment over algorithmic outputs. “Currently, there is no doctrine or precedent to assign any full or partial responsibility to the AI system and its developer, even when the AI recommendations directly impact patient care,” says DeMoss. “But, in preparation of what is to come, clinicians and AI technology developers should both assess liability insurance coverage to determine what is excluded from coverage.”

Acting “reasonably” can require a critical outlook, since generative models are often built to be agreeable, a tendency sometimes described as a “people pleaser” bias. In a clinical setting, this can manifest as the software fabricating data to support a physician’s suspected diagnosis. Responsible AI use “is going to be grounded in ensuring accuracy and that will mean compensating for any inherent bias in an AI tool that is likely to act as a ‘people pleaser’ and may even hallucinate responses to give the user what they seem to want,” states one physician on Sermo.

Legally, the physician is the final barrier against harm. Following an algorithm into an error is frequently viewed as a failure of human oversight. “The doctor is in the position of being the ‘last clear chance’ to avoid an AI input that is wrong from translating into actual care,” states a Sermo member with medicolegal experience.

If you disagree with an output, your clinical rationale can protect you. “If the doctor was aware of the AI finding but did not believe that it should be followed… the defense would be that the doctor needs to show that the AI result was not likely correct,” one physician contributes to the debate.

Key takeaways for physicians

In the eyes of the law, algorithms are an augmenting tool rather than a replacement for clinical judgment. When you use these systems, strong documentation includes the “why”: note the algorithmic input and clearly state your clinical rationale for either following or rejecting it. “Documentation and critical judgment are going to matter more than ever,” asserts one Sermo member.

As seen in Raine v. OpenAI, plaintiffs may pursue tech companies for damages when a physician isn’t involved. “The primary claim in the case is under strict liability… treating ChatGPT as an inherently defective product,” notes a physician on Sermo. In cases where a physician utilized an AI tool, the primary legal framework is typically professional negligence. 

Physicians have an independent, non-delegable duty to the patient. “These duties then mandate the responsible use of AI by professional standards,” writes one Sermo member. Another community member puts it this way: “Doctors using AI are obligated to be a parking stop at the least and a spike strip at the most, evaluating whether the AI input should be allowed to drive forward into the care.” A third member cautions that “AI is only a threshold tool that is then subject to human review.”

Your role as the clinical adjudicator

While artificial intelligence is designed to be helpful, it is not necessarily accurate. Your role as a physician is to act as the “clinical adjudicator,” filtering algorithmic outputs through a lens of clinical expertise to ensure patient safety.

Physicians on Sermo are navigating this new territory together, discussing how they use AI tools while maintaining a critical eye. Through the online community, physicians are learning from one another’s real-world experiences across specialties, countries and legal systems. Join for free and connect with more than 1 million physicians globally.

This article has been medically reviewed by a member of the Sermo physician community.