Can physicians and patients trust AI doctor apps like ChatGPT?

When asked, “How often do your patients mention using AI tools like ChatGPT or symptom-checking apps before seeing you?”, 48% of physicians on Sermo said their patients frequently or occasionally mention using such tools before a consultation.1

While it’s clear something has shifted in the way patients are accessing their healthcare information and support, what’s less clear is whether they are turning to AI because they want more control, or because they’re losing trust in the system.

It’s a question that gets to the heart of the unease many doctors now feel. According to a recent Sermo poll, 47% of physicians cite misdiagnosis or delayed care as their top concern when patients use AI for medical advice.1 Another 24% worry that AI simply lacks the clinical nuance needed for sound decision-making.

So, what risks do AI medical diagnosis apps bring into the consultation room? Which patient groups are most likely to use AI, and what do doctors think of AI health apps? This article uses Sermo poll data and insights from Sermo’s physician community to find out.

The rise of the AI-informed patient

Apple’s recently announced “AI Doctor” assistant, a virtual health coach that uses real-time data from devices like the iPhone, Apple Watch and AirPods to deliver personalized wellness advice, food tracking, workout guidance and early risk detection,2 is the latest signal that health-related AI tools are moving into the mainstream.

While the technology promises convenience, access and instant answers, it also creates a new kind of patient, one who enters the clinic with AI-generated beliefs about their health.

“I now frequently encounter patients who have developed some sort of perspective from an AI-based platform,3” shared a Radiologist on Sermo. The experience isn’t isolated. 49% of surveyed physicians report seeing this trend in their day-to-day work.

This behavior marks a shift in how patients approach clinical care. It’s not just that people are Googling symptoms, but rather they’re now consulting complex algorithms that mimic human conversation, sometimes with a tone of authority that rivals medical professionals – and that makes things more complicated in the exam room.3

Why physicians are concerned about AI in medicine

47% worry about the risk of misdiagnosis or delayed care

The worry isn’t simply that AI gets things wrong, but that patients may act on partial or misinterpreted answers before seeking medical advice. “A good clinical history and physical examination can never be compared with a description of what the patient might think they have,3” said one GP. They noted that this misunderstanding can lead to critical delays, particularly when AI responses downplay symptoms that actually require urgent attention.

Elsewhere, a Cardiologist on Sermo echoed this risk, acknowledging that while AI might offer useful guidance in the right context, “a patient might not be able to interpret the results on their own in all cases.3” 

The concern is less about technology making mistakes and more about patients making decisions with incomplete clinical insight.

24% worry about the lack of clinical nuance in AI responses

Another 24% of physicians raised alarms about AI’s lack of clinical nuance.1 These doctors see the diagnostic process as something that goes far beyond matching symptoms to conditions. 

Algorithms can offer probabilities, but they fall short in the interpretive complexity of real-world diagnosis, especially when symptoms are vague or coexisting conditions obscure the picture.

A Radiologist added that AI performance depends entirely on how questions are asked and what the model has been trained on.3 The issue isn’t capability in theory, but reliability in practice. That gap can be difficult to bridge when patients treat AI tools as definitive.

7% worry about privacy and data security

Although a smaller group, 7% of physicians pointed to privacy and data security as significant risks.1

For clinicians working within strict regulatory environments, the idea of patients uploading sensitive health details into unregulated platforms is unsettling. A Family Medicine expert on Sermo expressed broad concern, including privacy and misdiagnosis, as part of a wider pattern of incomplete care.3

One Infectious Disease specialist was more direct: “I am concerned about lack of patient privacy with AI tools.3” These voices highlight an aspect of AI that the general public may be unaware of, but whose risks clinicians grasp all too well.

16% worry about the erosion of trust in physician expertise

There are also concerns about eroding trust in physician expertise, an issue cited by 16% of respondents.1

Some physicians described encounters where patients came in with AI-generated diagnoses and filtered every clinical recommendation through that lens. “It takes patience and a lot of tact,3” said one GP. This shift in dynamic turns doctors into explainers rather than decision-makers. It could weaken the relationship physicians have with their patients and slow agreement on diagnoses and treatment plans at moments when time is of the essence.

Which patients are turning to AI?

While 59% of physicians on Sermo believe it’s younger, more tech-savvy patients who are most likely to use AI for health advice, that doesn’t tell the whole story. As one physician noted, “Sometimes an older patient also uses it because their community influenced them.3”

But is there anything else motivating this behavior?

Beyond age, 14% of physicians say the common thread is distrust in traditional systems.1 Patients disillusioned by past experiences or long wait times are seeking answers elsewhere. “AI will both help and hurt, depending on whose hands it’s in,3” wrote a Family Medicine expert who observed that many users are anxious or skeptical of mainstream care.

Others turn to AI due to circumstance. 9% of doctors said patients with chronic conditions use it for second opinions and 8% pointed to those lacking regular access to healthcare.1 In such cases, tools like ChatGPT may feel like a practical substitute. 

However, as an Internal Medicine member on Sermo emphasized, AI use should never stem from mistrust: “There is work for everyone if we want to optimize all these new resources.3” Trust, guidance and access remain key to safe adoption.

Ultimately, it’s the context of AI use that matters. If patients are turning to AI because they feel underserved or mistrust specialists, then it points to a systemic problem, one that can’t be solved by critiquing AI apps alone.

Is there any upside to patients using AI for their health?

Still, not every physician on Sermo views AI as a threat. 42% say they’re cautiously optimistic,1 and several note its potential to support patient education, provided it’s framed correctly.3

“AI may have good advice and utility in the right setting,” said a Cardiologist, “but a patient might not be able to interpret the results on their own.3” A GP echoed that sentiment, saying AI can help patients become more informed, but also flagged that it sometimes leads to inappropriate demands for testing and referrals.3

Used well, AI could serve as a springboard for more productive conversations in the clinic. One Dermatologist shared how they use it to engage patients: “It can be helpful in explaining to the patient why you either agree or disagree with AI.3”

That kind of collaboration between physician and patient, not AI and patient, is where these tools could find a useful place.

What role should physicians play?

The question, then, isn’t whether AI will replace physicians. It’s whether doctors will be given the space and support to integrate these tools into the care journey in a way that benefits patients and doesn’t diminish clinical authority.

34% of physicians on Sermo say clinicians should take an active role in recommending and contextualizing trustworthy tools.1 Others suggest referencing established institutional sources and proactively discussing the limitations of AI with patients.3

“Even we, as highly trained professionals, know our limitations,” said a Psychiatrist on Sermo. “Nuances make for the art of medicine.3”

Another physician put it more plainly: “AI can be beneficial, but we should educate our patients on its limits and make it clear that it doesn’t replace one-on-one attention.3”

That’s the role many doctors are already playing: not rejecting AI outright, but translating it, correcting it and reframing it in ways only a human clinician can.

Your takeaway

So, will AI replace doctors?

Not quite. But patients are looking for something that AI in healthcare claims to offer: clarity, speed and understanding.

Physicians on Sermo aren’t pushing back against AI because they fear irrelevance, but rather because they know what’s at stake when care is driven by convenience.3 While there’s space for AI to support education and engagement, it’s the physician who ensures that care remains rooted in clinical expertise, not algorithms. As one Pediatrician put it: “AI is a double-edged sword that can never fully replace human intelligence and in-person care.3” And right now, the challenge is making sure the handle stays in the right hands.

Join the conversation on Sermo

Physicians around the world are already discussing the future of AI in medicine and how it’s changing patient care. Want to share your perspective, learn from peers and access real-time insights from the frontlines of healthcare?

Join Sermo today and be part of the global physician community shaping what comes next.

Footnotes

  1. Sermo (2025) Poll of the Week: AI & Self-Diagnosis. Available at: https://app.sermo.com/feed/for-you/post/1406991/expanded (Accessed: 16 May 2025).
  2. Bajarin, T. (2025) Apple’s AI and the Next Era of Preventive Care Innovation. Forbes, 8 April. Available at: https://www.forbes.com/sites/timbajarin/2025/04/08/apples-ai-and-the-next-era-of-preventive-care-innovation/ (Accessed: 16 May 2025).
  3. Sermo member comment on: Sermo (2025) Poll of the Week: AI & Self-Diagnosis. Available at: https://app.sermo.com/feed/for-you/post/1406991/expanded (Accessed: 16 May 2025).