
Artificial intelligence has arrived in medicine with promises to revolutionize patient care, reduce physician burnout and improve diagnostic accuracy. But as physicians increasingly rely on these tools, it’s unclear whether the technology is diminishing the very skills it’s meant to support.
Recent data from Sermo’s global community reveals that some physicians themselves are concerned about AI adoption in healthcare. In a poll, members indicated concern around potential negative consequences—including reduced vigilance or increased automation bias (22%), deskilling of new physicians (22%) and erosion of clinical judgment and empathy (22%).
Some members even believe their skills have already started to diminish from AI use. “At first AI felt like the perfect colleague,” recounts a surgical oncologist. “But I noticed I stopped second-guessing and my diagnostic muscles dulled. If we’re not careful, we’ll wake up as glorified button-pushers.”
To this physician’s point, successful AI integration will require a thoughtful balance. It’s about embracing the future without forgetting what makes physicians indispensable.
Is AI in medicine a promise or peril?
Healthcare’s relationship with artificial intelligence began decades ago, but recent advances in machine learning and natural language processing have accelerated adoption. Electronic health records (EHRs) introduced basic decision support systems in the 1990s, but today’s AI tools can interpret medical imaging, predict patient deterioration and even generate clinical documentation in real time.
Current usage patterns among Sermo members reveal adoption across various tools. According to a survey of over 2,000 global physicians, 23% currently use AI for clinical scribing and another 23% employ it as a diagnostic aid (respondents could select more than one option). A further 13% use risk estimation tools, and 17% leverage AI for non-clinical administrative tasks.
Members’ perspectives on AI’s impact on patient outcomes remain cautiously optimistic. While 15% believe AI will lead to much better patient outcomes and 54% expect somewhat better results, 11% see no difference.
Sermo members have a largely positive outlook on the potential for AI tools to help address burnout. With burnout affecting nearly half of U.S. physicians according to recent surveys, 66% of polled Sermo members agree that AI can meaningfully reduce administrative burden. That said, AI can only go so far, one internal medicine physician believes. “AI scribing can reduce documentation burden and analyze trends, but it must never replace history-taking, physical exams, or patient connection,” they shared.
Deskilling and automation bias in doctors
The concept of “deskilling” in healthcare refers to the “gradual erosion of clinical skills and expertise” that can occur when physicians become overly dependent on automated systems, according to a 2024 study. This phenomenon isn’t unique to medicine—for example, pilots have experienced similar concerns with autopilot systems.
Research suggests that deskilling is a real possibility. One study found that after physicians started using AI-assisted polyp detection, their unassisted detection rates declined.
Automation bias represents another significant concern. This psychological phenomenon occurs when humans over-rely on automated systems, accepting their outputs without sufficient critical evaluation. In aviation, automation bias has contributed to accidents when pilots failed to recognize and correct system errors.
The medical implications are equally serious. “AI is an invaluable co-pilot, but physicians must remain the pilot in command to avoid losing core clinical judgment,” urges a radiation oncologist on Sermo.
Given the risks, some physicians are in favor of proceeding with caution. “AI is a tool, and every tool can be misused—whether through clumsiness or by those with harmful intent,” states a dermatology and surgical oncology physician on Sermo. “Physicians should limit use to documentation until patient-care applications are proven safe.”
The benefits of AI for doctor training, patient trust and transparency
Amid deskilling concerns, AI technology also has potential benefits. Radiology AI has demonstrated remarkable accuracy in detecting certain cancers, sometimes exceeding human performance. Pathology AI helps identify cellular abnormalities that human eyes might miss.
For medical training, AI presents unprecedented learning opportunities. Radiology residents can practice interpreting cases with immediate AI feedback, accelerating their diagnostic development. Surgical trainees can use AI-powered simulators that adapt to their skill level, providing personalized education impossible in traditional settings.
Patients may appreciate when physicians explain how AI assists their care, viewing it as an additional safety measure rather than a replacement for human judgment. Modern AI systems increasingly provide explainable outputs, showing which features influenced their recommendations. This transparency allows physicians to evaluate AI suggestions critically and explain their reasoning to patients. When physicians are clear that they use AI as a fact-checking tool and not for initial diagnosis, it can help reinforce patient trust.
Some physicians have emphasized the importance of using AI tools carefully, to achieve their benefits without over-relying on them. “Physician training and peer reviews must adapt,” writes a family medicine physician on Sermo. “Like UpToDate, AI isn’t a substitute for judgment or expertise.”
Others have called for regulations and ethical guidelines around AI use. “AI can improve diagnostic accuracy and efficiency but over-reliance risks deskilling and loss of patient trust,” states an anesthesiology and critical care specialist on Sermo. “Ethical guidelines are essential.”
In a 2024 survey conducted by the American Medical Association (AMA), participants indicated that increased oversight was the top regulatory action that would increase physician adoption of AI tools. In a report from the same year, the AMA noted that no whole-of-government strategy for oversight and regulation exists in the U.S., but that the U.S. Department of Health and Human Services (HHS) has developed a general strategy for trustworthy AI use.
Physician strategies to prevent deskilling
If you use AI in your practice, you can take measures to prevent deskilling and, by extension, maintain patient trust. These are some strategies that preserve clinical reasoning while leveraging technological benefits:
Critical questioning
Critical questioning can help prevent automation bias. Before accepting any AI recommendation, ask: Why did the AI suggest this? Does this align with established guidelines, my clinical experience and this patient’s specific context? This approach treats AI as a sophisticated second opinion rather than an authoritative directive.
Continued skill practice
Live by the “use it or lose it” principle. Even when AI can automate certain tasks, physicians can benefit from regularly performing core skills manually. This might mean interpreting some imaging studies without AI assistance or conducting physical exams before reviewing AI-generated assessments.
Continuous education
You can combat deskilling by understanding how AI systems work and their limitations. Continuing Medical Education (CME) programs increasingly offer AI-focused content, but you’ll need to actively seek these opportunities. Topics include understanding algorithm bias, recognizing system limitations and maintaining clinical skills in AI-augmented environments.
Independent diagnosis formation
Independent diagnosis formation before consulting AI recommendations preserves clinical reasoning skills. This approach involves developing a differential diagnosis, treatment plan or interpretation before reviewing AI suggestions. The AI then serves as a check on your thinking rather than a replacement for it.
Peer consultation
You can maintain the human element in medical decision-making through peer consultation. Complex cases benefit from colleague discussions that AI cannot replicate.
Communities like Sermo facilitate these peer interactions, allowing members to discuss challenging cases and share experiences with AI tools. “AI is a valuable adjunct, but dependence could cost us our clinical skills,” a family medicine physician member notes. “These are scary times, and talking with colleagues helps me navigate them thoughtfully.”
Workflow policies
Build safeguards into daily practice. These might include requirements for independent verification of AI-driven results, scheduled periods of non-AI practice or mandatory peer review of AI-assisted decisions in complex cases. You can look to existing frameworks for guidance around selecting AI models and implementing them ethically.
Balancing care efficiency with the art of medicine
The integration of AI into medical practice creates tension between efficiency gains and the preservation of medicine’s human elements. While AI can reduce documentation time and streamline administrative tasks, there is a risk that physicians lose the empathy, intuition and personal connection that define excellent care.
AI systems, no matter how sophisticated, cannot replicate the emotional intelligence required for effective patient care. A psychiatrist on Sermo highlighted the concern: “AI scribes are cookie-cutter substitutes that risk eroding doctors’ ability to think critically while adding costs, privacy concerns, and error risks.”
That said, physicians can use practical implementation strategies to preserve human connection without missing out on the benefits. One such strategy is selective AI deployment: rather than implementing AI across all practice areas simultaneously, physicians can choose specific applications where benefits clearly outweigh risks.
Emergency medicine physicians, for example, might use AI for triage documentation while maintaining traditional approaches for patient communication and clinical decision-making. This selective strategy allows them to experience AI benefits while preserving skills in critical areas.
Physicians can also maintain trust by educating patients about AI. Patients tend to appreciate transparency about how technology assists their care, especially when physicians frame AI as an additional safety measure.
The challenge extends beyond individual physician choices to encompass healthcare system policies and incentives. Organizations can support physicians in using AI to enhance rather than replace human connection, recognizing that some inefficiencies in healthcare serve important relational purposes. This could mean designing workflows that intentionally leave space for direct patient interaction, or prioritizing training programs that help clinicians integrate AI responsibly.
Your takeaway
Physicians believe that AI tools offer significant potential benefits while posing risks to clinical skills and patient relationships, Sermo data shows. Thoughtful integration may help preserve medicine’s human essence while leveraging technology’s power to improve care and reduce burnout.
Rather than passively accepting AI recommendations, you can actively engage with these tools, understanding their capabilities and limitations. You can seek training and education that’ll teach you how to use AI tools and evaluate their outputs critically, and maintain transparency with your patients on how and when AI is used in your practice.
The Sermo community is already playing a role in facilitating peer discussions about AI implementation. There, you can share experiences, discuss challenges and develop best practices in collaboration with other physicians. These conversations can help the medical community learn collectively from lived experiences rather than individually facing AI’s challenges.
The future of medicine could involve AI tools becoming as commonplace as EHRs or digital imaging systems. The question isn’t whether physicians will use AI, but how they’ll integrate the tools while preserving clinical judgment, empathy and human connection.
As one general practice and emergency medicine physician concluded: “AI must complement, not replace, medical training and human judgment.”
Frequently asked questions

Can AI completely replace doctors?
No, AI cannot completely replace doctors. Medicine’s human elements, including empathy, intuition and personal connection, can’t be replicated by technology.

How can AI improve patient outcomes?
AI may help physicians detect abnormalities that they might otherwise miss. It may also help reduce physician burnout, which could in turn lead to better healthcare outcomes.

What are the risks of AI in medicine?
AI could contribute to deskilling or automation bias if physicians don’t use the tools prudently.

How can physicians prevent deskilling?
Physicians can prevent deskilling by using critical judgment, continuing to practice manual skills, continuing their education, forming diagnoses independently of AI assistance, consulting peers and adopting workflow policies as safeguards.

Which AI tools show the most promise today?
Some of the most promising existing tools include radiology AI that can detect certain cancers, pathology AI that helps identify cellular abnormalities, and AI-powered simulators used in training.






