How OpenEvidence AI is transforming clinical decision-making


AI tools are edging their way into clinical practice, but physicians aren’t meeting them with uniform confidence. 

In a recent Sermo poll, 60% of doctors said they’d only vaguely heard of OpenEvidence AI, while 37% admitted they hadn’t heard of it at all. Yet when asked how they feel about AI’s role in decision-making, most expressed some level of openness: 20% are very supportive, 54% are cautiously open, and 21% fall somewhere between concerned and skeptical.

Comments from members capture this split. As one GP put it, “OpenEvidence AI is like having a top-tier research assistant… a massive time-saver… helping us be more efficient, precise, and stay updated, freeing us up for what’s most important: our patients.” But a psychiatrist raised the flip side: “Great in theory, but who is preparing the database? Who is checking… Are there biases? What are the legal ramifications?”

This article unpacks those tensions, using Sermo poll data and community insight to explore what physicians really think.

What is OpenEvidence?

OpenEvidence AI is built as a clinician-facing decision-support tool, designed to answer point-of-care questions with evidence-based recommendations. It pulls exclusively from trusted, peer-reviewed sources like PubMed and major guidelines, and was developed by Harvard and MIT researchers through the Mayo Clinic Platform Accelerate program.

Access is limited to verified clinicians, with the goal of making it faster and easier to review literature and see supporting citations. Early evaluations in primary care show that OpenEvidence delivers clear, relevant answers that tend to back up physician judgment rather than replace it.

So, how many doctors use OpenEvidence? Well, adoption is already rising. By mid-2025, over 40% of U.S. physicians reported using it daily, and it’s now embedded in more than 10,000 hospitals and medical centers. Among Sermo members, awareness looks similar: 60% say they’ve at least heard of it. But awareness doesn’t always mean adoption, and doctors are using it in different ways.

In our poll, 21% described it as a useful tool for supporting decisions, with one pediatric specialist noting, “Evidence-based Medicine is a tenet of our clinical acumen and OpenEvidence AI can make this accessible on the wards and at the bedside.” Another 24% said it saves time, with a pediatrician saying: “It streamlines clinical decisions with fast, evidence-based answers, improving accuracy and saving time.” Still, even supportive physicians add caveats, with one GP saying, “I frequently use OpenEvidence as a tool… but I will take an additional step to verify information if I am suspicious.” Others remain skeptical, with 13% calling it too new or untested: “AI should be used for information purposes only. Clinical judgment is the key to patient care,” states one oncologist.

For now, OpenEvidence is gaining ground because it promises efficiency and access, but trust remains conditional, given the many concerns physicians have about the technology.

How does OpenEvidence work?

OpenEvidence is not just a simple search engine; it’s a clinical decision-support platform that blends the functionality of a medical search engine with AI-driven synthesis and reasoning. Physicians can pose questions in natural language—for example, “What is the latest evidence on the use of SGLT2 inhibitors for heart failure in non-diabetic patients?”—and instead of producing a long list of links, the platform scans thousands of peer-reviewed studies, guidelines and systematic reviews to generate a concise, referenced summary.

The key distinction from tools like PubMed or Google Scholar is that OpenEvidence goes beyond literature retrieval. Rather than requiring the physician to sift through dozens of abstracts and manually interpret the evidence, it highlights the strength and direction of findings, surfaces points of consensus or controversy and, crucially, translates that evidence into actionable clinical suggestions. For instance, if current guidelines support a certain drug class in a given patient population, OpenEvidence will surface that recommendation directly, along with the supporting evidence base. The tool doesn’t just save physicians time; it directly supports evidence-based decision-making at the point of care, offering a level of clinical guidance that traditional search engines were never designed to provide.

Another benefit is that, unlike general-purpose AI chatbots such as ChatGPT or Gemini, which can generate plausible but unverified (or outright hallucinated) responses, OpenEvidence is trained exclusively on trusted medical literature and maintains transparent sourcing. Every statement is linked back to primary studies or guidelines, allowing physicians to validate the evidence before applying it.
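Conceptually, this retrieve-rank-summarize-cite loop follows the familiar retrieval-augmented pattern. The Python sketch below is a minimal, hypothetical illustration of that pattern only; it is not OpenEvidence's actual implementation, and the corpus, relevance scores, and function names are invented for demonstration.

```python
# Hypothetical sketch of a retrieve-then-synthesize loop with citations.
# The corpus, relevance scores, and API are illustrative, not OpenEvidence's.
from dataclasses import dataclass


@dataclass
class Study:
    title: str
    finding: str
    citation: str
    relevance: float  # assumed precomputed relevance to the query, 0-1


def answer_question(query: str, corpus: list[Study], top_k: int = 3) -> str:
    """Rank studies by relevance, then emit a summary with inline citations."""
    ranked = sorted(corpus, key=lambda s: s.relevance, reverse=True)[:top_k]
    lines = [f"Q: {query}"]
    for study in ranked:
        # Each claim stays linked to its source, mirroring transparent sourcing.
        lines.append(f"- {study.finding} [{study.citation}]")
    return "\n".join(lines)


corpus = [
    Study("EMPEROR-Reduced", "Empagliflozin reduced HF hospitalizations in HFrEF",
          "NEJM 2020", 0.95),
    Study("Unrelated trial", "No relevant finding", "J Misc 2018", 0.10),
]
print(answer_question("SGLT2 inhibitors in non-diabetic heart failure?",
                      corpus, top_k=1))
```

The point of the sketch is the shape of the workflow: retrieval narrows the corpus, ranking filters for relevance, and every synthesized line carries its citation so the clinician can verify it.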

Clinicians are using OpenEvidence to identify evidence-backed treatment options and cross-check their decisions against clinical guidelines and trials, as described in a recent study.

Top concerns about OpenEvidence and clinical AI

For all the enthusiasm around OpenEvidence, most physicians on Sermo still approach it with caution. 

When asked what concerns they had about using AI systems like OpenEvidence in practice, 44% pointed to accuracy and the risk of misinformation, 19% highlighted the lack of physician oversight or explainability, 16% raised legal or liability risks, and 7% flagged patient trust and acceptance. Only 6% said they had no major concerns, while 8% admitted they hadn’t really considered the question yet. Together, these results show that the road to wider adoption depends less on adding new features and more on addressing the fundamental issues of trust, responsibility, and accountability.

Accuracy is non-negotiable. Physicians won’t adopt OpenEvidence at scale until they’re confident its outputs are both reliable and current. As one GP explained, “I have had instances of trying to use AI-generated information in patient care, but noticed inaccuracies and false or outdated information. More work to be done on validating and reviewing before I will use AI regularly in day-to-day practice.” Accuracy is about maintaining trust, and as another internal medicine physician put it bluntly, “I don’t trust it because when it’s wrong, it’s annoyingly confident in its wrong answer.” Overconfidence in an inaccurate response can be more dangerous than uncertainty, because it risks persuading clinicians to act on flawed information.

Others worry not just about correctness, but about the volume of information OpenEvidence produces. AI’s ability to rapidly generate polished, well-written content creates its own risks. As one anesthesiologist warned, “Given the tendency for AI to find connections that may not have even a sound theoretical basis… there is an unknown risk of disinformation. The rapidity of information generation would also be an issue. So quantity and verifiable quality are concerns.” In other words, speed without rigorous filtering adds nothing but noise.

Oversight and transparency come next. Nearly one in five physicians in our poll said the lack of physician oversight was a major issue. One GP said, “AI in medical practice is helpful; however, lack of physician oversight is still a concern.” Others stressed the importance of knowing not just what the tool says, but how it got there. As a stomatologist noted, “Its usefulness will depend on…the transparency of its sources and reasoning processes.” Doctors want the evidence chain visible, not buried behind a black box.

Liability is another sticking point. Even if OpenEvidence makes the right call 99% of the time, physicians know that when something goes wrong, the responsibility still rests with them. “If you let yourself be carried away by it and fail, you cannot defend yourself in court by saying that it was the artificial intelligence’s fault,” one GP explained. An orthopedic surgeon made the same point: “The signature at the bottom of the diagnostic report is always MINE and I am responsible for medical-legal purposes.” These comments highlight why legal defensibility remains a barrier to AI adoption; until there are clear frameworks for who (or what) is liable, some physicians won’t touch it.

The perspectives of patients can’t be overlooked either. While only 7% of Sermo respondents flagged patient trust directly, the community’s comments show it weighs heavily on doctors’ minds. “I think it has the potential to be exploited by big pharma or racial bias, which they have already shown AI can do,” warned an ophthalmologist, tying patient trust to the integrity of the system itself.

Another GP captured the human element in simpler terms: “The truth is that AI is promising, but it should never replace human warmth. As a tool, it is great, but not as a substitute for traditional medicine.” Indeed, patients need reassurance that clinical care remains grounded in human compassion.

Finally, a small minority, 6%, reported no major concerns at all. For them, the benefits already outweigh the risks. As one psychiatrist put it, “I am a huge fan of AI in my practice. It makes my job easier.” These voices show where the trajectory is heading, even if not all of their peers are yet ready to follow.

Taken together, the data suggests that physicians see potential in OpenEvidence, but they need proof that it won’t expose them to risk or erode patient trust. Until the legal foundations are in place, OpenEvidence will not be used to its fullest extent, or, for some doctors, at all.

What would encourage more physicians to adopt the technology?

If accuracy and accountability are the biggest barriers, then what would actually tip the balance toward adoption? Sermo poll data points to a straightforward answer: proof and trust. The top request from physicians was peer-reviewed validation, with 37% saying it’s the factor that would encourage them most. As one internist put it, “It could be used as a tool once the accuracy has been verified.” Until reliability is confirmed in real-world studies, many doctors aren’t ready to lean on it day to day.

Close behind is institutional endorsement (21%). Physicians want hospitals, medical societies or respected KOLs to stand behind the tool. One radiologist summed it up well: “The validity and reliability of the tool needs to be verified and endorsed by trusted institutions and key opinion leaders in multiple clinical settings before being widely adopted.” Endorsements like these help distribute the responsibility of trusting AI.

EHR integration is next (17%), and for good reason. As one GP noted, “OpenEvidence integrates with electronic health records, suggesting relevant research and potential diagnoses based on patient information.” Embedding the tool into existing workflows is what makes it genuinely useful at the point of care.

Time-saving features (16%) also matter, but doctors don’t want speed to come at the expense of accuracy or oversight. And while a small minority (8%) remain firmly opposed, they appear to be the exception. 

For most physicians, the path forward with AI is simple: demonstrate accuracy, build trust through validation and endorsements, and make the tool easy to use inside everyday systems.

Your takeaway

Physicians on Sermo see OpenEvidence AI as promising but not yet proven. Its greatest strengths, like speed, access to evidence and support for decision-making, are tempered by concerns about its accuracy and about who is liable when it gets things wrong.

Most doctors don’t view it as a replacement for clinical judgment, but rather as a supplement that could save time and reduce cognitive load if validated and integrated responsibly.

With peer-reviewed validation, institutional backing, and seamless workflow integration, opinions on OpenEvidence could shift from cautious curiosity to a trusted, everyday tool for doctors.

Join the conversation on Sermo

How are you using AI tools like OpenEvidence in practice, or why are you holding back? Share your experience and learn from your peers on Sermo.