
Half (50%) of physicians on Sermo say their patients mention using AI tools like ChatGPT or AI symptom checker apps at least occasionally before a visit. Almost as many (47%) say it rarely or never happens.
That split statistic captures the emerging reality of AI in healthcare: for some patients, these tools are already shaping how they understand their symptoms and prepare for appointments. For others, the technology hasn’t entered the picture at all. As a result, physicians are navigating two parallel clinical landscapes: one where AI is an active part of the conversation and one where it’s entirely absent.
This new dynamic brings both opportunities and tensions. As one GP observed, “Unfortunately, patients frequently come to the office with the diagnosis already made.” For some patients, AI use has already begun to shape not only how they present their symptoms but also how they listen to medical advice.
Yet others in the Sermo community are more optimistic. “It’s a useful and promising tool that we shouldn’t give up on,” said one family medicine physician. “It’s important that our patients know how to use it and understand its limitations.”
How does this impact day-to-day practice? How are physicians responding to AI-informed patients? And what strategies are they using to keep those conversations productive? Using Sermo poll data and member commentary, this article explores the emerging clinical conversation around AI and the role doctors are playing in shaping it.
What kinds of patients are using AI symptom checkers?
The perception that AI tools are primarily used by younger, digitally fluent patients is supported by physician opinion: 58% of physicians on Sermo stated that this demographic is most likely to engage with AI for medical queries. As one intensive care physician put it, “AI is increasing the level of knowledge on diseases and treatments for many people, especially young people.” But this trend isn’t universal. Another physician noted, “My patient population is not very tech savvy.”
The AI tools most commonly used by patients
According to a recent survey, 35% of Americans use AI to learn about or manage their health. Below is a breakdown of some of the most common AI tools and how patients tend to use them, although none of these tools are designed or approved for clinical diagnosis or advice:
1. ChatGPT (OpenAI)
Patients may use ChatGPT to self-diagnose symptoms, research medications, or prep for appointments with lists of questions. Some may even ask it to interpret lab results. ChatGPT is helpful for information and education but is not approved for diagnosis and can make errors or offer incomplete guidance.
Frequent users: Ages 18–45, predominantly the U.S., UK, and Canada
Traits: Digitally literate, urban, often in tech or adjacent fields
2. Gemini (Google)
Because Gemini is integrated with Google Search, patients may use it for natural-language health queries, comparing conditions, or checking symptoms with AI-powered explanations and visuals. Gemini also powers many of the new “AI Overview” responses to health queries, which are still rolling out in select markets and remain under validation for accuracy.
Frequent users: Ages 20–50, global reach, especially the U.S., India, and English-speaking populations
Traits: Often Google users, smartphone-first, sometimes using Android health integrations
3. Perplexity AI
Patients may use Perplexity, an AI-powered search engine, to ask focused questions and get cited sources on symptoms, treatments, or prognosis. It is gaining traction as a citation-focused research assistant for patients seeking evidence-backed health information and clear explanations. Its strength lies in providing answers with direct source attribution, in contrast to more conversational AI programs.
Frequent users: Ages 25–55, especially students, researchers, or professionals in the U.S. and Europe
Traits: Information-savvy, citation-conscious, prefer clarity and source validation
4. Meta AI
Embedded into Messenger, WhatsApp, and Instagram, Meta AI may be used by patients to ask general wellness questions, search for symptoms, or get quick advice-style answers, often informally or out of curiosity. It’s one of the most accessible chatbots due to its integration within popular social platforms.
Frequent users: Ages 16–35, with high uptake in North America, Latin America, and India
Traits: Social media-native, casual users, sometimes lower health literacy; tend to use AI passively or reactively
When AI challenges physician authority
Tech fluency isn’t the only driver of AI use. The second-largest group, identified by 15% of respondents, comprises patients who distrust traditional healthcare systems. These individuals often arrive with their minds already made up, seeking confirmation rather than consultation. One OB-GYN shared, “Most of my patients who used AI tools are people who don’t trust the initial evaluation of physicians… they go to a specialist just to get the treatment AI gave them.”
That lack of trust can quickly erode the doctor-patient relationship. A pathologist on Sermo recounted a case where, despite a benign pathology report, the patient insisted on further cancer testing because of what an AI had suggested. This case highlights the need for physicians to be empathetic and guide patients in their use of AI to prevent misinterpretation or overreliance on potentially flawed outputs.
Then there are the patients who aren’t skeptical but persistent: people with chronic conditions who use AI medical advice to seek second opinions, pitting AI and doctor diagnoses against each other. “I now frequently encounter patients who have developed some sort of perspective from an AI-based platform,” said a radiologist. “It becomes difficult because clinical discussions … become more of a conceptual debate.”
These debates can deepen patient engagement but also complicate time-limited consultations. As one pediatrician warned, “It is too soon to rely on this kind of basis to determine diagnosis and prognosis. The information should always be given by an expert and not a machine.”
Patient usage of AI is not universal
Still, not all physicians see a clear pattern. Roughly 9% reported no noticeable trend, pointing instead to sporadic mentions from a wide mix of patients. One GP said, “AI use by patients is on the increase. Sometimes it causes undue anxiety and worries… it creates doubts even after a professional diagnosis.”
Finally, some patients use AI not out of preference, but necessity. For those in remote or underserved areas, AI can serve as a digital triage tool. One otolaryngologist described a case where a rural patient used an AI tool to flag potential acute rhinosinusitis in their child. The diagnosis was accurate and led to timely care. However, not all physicians are on board. Another GP countered: “AI is really nothing that should be used for self-diagnosis. I strongly disapprove.”
What emerges is a layered picture of who uses AI most. Age and digital literacy may be the clearest predictors, as you’d expect, but trust in physicians and access to care turn out to be strong indicators too. AI hasn’t yet replaced traditional search engines like Google, but it is gaining ground in some rather unexpected ways.
Concerns around AI-led self-diagnosis
When asked about their top concerns regarding patients who rely on AI tools for medical advice, nearly half of Sermo physicians (46%) pointed to the risk of misdiagnosis or delays in care. In practice, this concern often stems from patients acting too quickly, or too confidently, on flawed advice. “It is very risky to leave patients the possibility of self-diagnosis,” said a pediatrician, highlighting the danger when AI outputs are accepted at face value without clinical oversight. For many doctors, the worry isn’t simply that AI might be wrong, but that patients may not know when it is.
A further 24% focused on what AI tools often lack: clinical nuance. Even when an AI-generated explanation sounds plausible, it rarely accounts for the subtle variables that shape real-world decision-making. Overlapping symptoms, evolving case histories, complex comorbidities, even body language: all of it plays a role in forming a complete picture. As one GP put it, “AI can be a powerful tool, but there are inaccuracies. Clinical acumen and examination are needed.” In other words, AI diagnosis in healthcare might describe the textbook version of a problem, but doctors are working with people, not textbooks.
Use of AI tools may impact patient interactions
Others in the Sermo community raised concerns about how these AI tools are reshaping their interactions with patients. When a patient walks in with AI-generated advice already in hand, the conversation can quickly shift to the doctor interpreting what the AI has written. As one anesthesiologist noted: “It’s essential to verify and interpret their advice carefully. Building trust and clear communication remains key in combining tech with traditional care.” In these cases, physicians work hard to ensure AI doesn’t become a disruptive force in the consultation room.
Ultimately, while the severity and type of concern varied, most physicians agreed on one thing: AI can’t replace the interpretive and experiential dimensions of medical care. They worry less about the tech itself and more about how patients understand and act on it.
Practical strategies for responding to AI-informed patients
Despite the concerns, most Sermo members are not rejecting AI outright. Instead, they’re exploring different roles they can play to keep patient use of AI safe and constructive.
Proactively recommend and contextualize trusted tools
Over a third (35%) of physicians say they now proactively recommend trusted tools and explain their limitations. As one GP commented on Sermo, “AI can be [beneficial], but as medical professionals, we should educate our patients on its limits and that this does not replace a one-on-one patient attention.”
By guiding patients toward vetted tools and setting realistic expectations, physicians help shape more grounded AI engagement.
Tip for talking to patients: Consider your patient’s background and knowledge of AI tools. For example: “Have you used any tools or websites to look into your symptoms before coming in today?”
Engage reactively when patients bring up AI
Others prefer a reactive approach: 28% said they only engage when patients bring it up. This may reflect time pressures or a desire to avoid overvalidating tools with known shortcomings. Still, many agree that when AI use does surface, it’s better addressed than ignored.
Tip for talking to patients: Acknowledge the patient’s concerns and compare the AI findings with your clinical opinion based on the history. For example: “I appreciate you looking into this before coming in. Let’s take a look at what it gave you and talk through how it compares to what I’m seeing today.”
Refer patients to reliable institutional sources
Another 17% of surveyed physicians on Sermo advocate referring patients to institutional or evidence-backed sources. That approach gives patients the autonomy to explore AI tools while ensuring the content they engage with meets basic standards of clinical reliability.
“It can be dangerous or beneficial, as always. The doctor must accept this situation and would be wise to recommend reliable resources,” explained an internal medicine physician. “The patient should lose their fear of telling the doctor that they have used these means of information, and both should be able to evaluate them together. Extreme trust is required, and the use of this resource should not be the result of mistrust in the doctor. There is work for everyone if we want to optimize all these new resources.”
Tip for talking to patients: Recommend evidence-based resources that support your advice. For example: “Some AI tools oversimplify or get things wrong. Here’s where you can read more about this.”
Stay informed to support meaningful conversations
Several physicians emphasized the importance of staying current. Keeping up with AI trends allows physicians to correct misinformation and deepen trust-based dialogue. One family physician noted, “I believe it’s important to keep ourselves informed as patients will use these tools more and more, and we need to be prepared to answer the questions generated by the use of AI. I personally enjoy being a little challenged, although it takes more time to justify why you won’t prescribe a specific exam.”
Balance caution with the potential to support care
One physician struck a note of balance: “AI in medical diagnosis offers both promise and caution… When used responsibly, AI can be a powerful tool to support—not replace—clinical judgment.” This mindset acknowledges both the risk and the potential of AI while reinforcing the physician’s role as the final authority on patient care.
Tip for talking to patients: Encourage the patient to use AI to support, not replace, clinical judgment. For example: “There’s promise in these tools, especially for increasing awareness. But they still need to be interpreted by someone who knows your whole clinical picture.”
Adaptation is essential
So, is there a consensus? Not exactly; the only point of agreement is that AI isn’t going away. Patients will continue to use it, and physicians will need to adapt. Whether that means engaging proactively, responding reactively, or offering balanced guidance, physicians remain the crucial bridge between digital advice and real-world care.
Navigating the future together
Physicians don’t share a single view of how AI is impacting healthcare, but nearly all have an opinion on it. Of the surveyed physicians on Sermo, only 9% said they were strongly supportive of AI-powered health apps. The largest group (42%) expressed cautious optimism, while 22% expressed concern and 6% were strongly opposed.
Can AI provide medical advice in a way that benefits both patients and physicians?
First, AI is already reshaping how patients and physicians interact. Half of physicians say patients are bringing it up, which means that whether physicians are ready for AI or not, it’s likely already in their consultation rooms.
Second, most physicians are open to AI as long as it stays in its lane. In general, doctors don’t fear the technology, but they do fear what patients might do with it when left unguided.
And third, the key difference between harm and help lies in how the conversation unfolds. The physicians who respond with context and empathy are the ones most likely to keep the doctor-patient relationship intact, even when AI gets it wrong.
AI symptom checkers are unlikely to be a passing trend. More probably, they are the next layer added to an already multifaceted doctor-patient relationship. And while that relationship may become more complex because of AI, it may also become more timely and accessible.
“AI has the ability to revolutionize medicine and health care practice. However, it should be used with caution!” – a general practice physician on Sermo
Your actionable takeaways:
- Prepare for AI to show up in your clinic, as patients are already bringing it into the conversation.
- Guide patients in their use of AI to prevent misinterpretation or overreliance on flawed outputs.
- Maintain the doctor-patient relationship by responding to AI-informed questions with context and empathy.
- View AI symptom checkers as a permanent part of patient behavior, not a passing trend, and adapt accordingly.
- Focus on communication over confrontation to ensure AI enhances, rather than undermines, clinical care.