
How doctors really feel about AI in medicine


The potential of AI to benefit humanity may be as great as its potential to destroy it, many prominent technology experts, futurists, and investors believe. Physicians’ opinions of AI in medicine hover in similar territory.

The recently released large language model GPT-4 has immense potential to transform medicine. ChatGPT can help physicians make diagnoses, decide on treatment plans, and ensure evidence-based decision making. The technology can also unburden physicians from time-consuming administrative work and note taking.

However, in an open letter recently posted on the website of the non-profit Future of Life Institute, billionaire Elon Musk, Apple co-founder Steve Wozniak, and former presidential candidate Andrew Yang joined hundreds of others in calling for a six-month pause on AI experiments, warning that otherwise we could face “profound risks to society and humanity.”

In two recent Sermo polls, physicians from around the world weighed in on the topic of AI in medicine and what it means for their practices. Here’s how they responded: 

51% believe AI can perform better than many doctors in a clinical setting
79% are particularly compelled that AI can perform in ways human physicians currently cannot, such as immediately being able to provide a correct genomic diagnosis for a condition that occurs in one in a million people
70% are excited about the potential for GPT-4 to transform medicine
83% are concerned by AI’s lack of true empathy when it comes to clinical settings
87% are concerned that the amount of training required for ChatGPT to be truly effective in medicine may not be feasible for most doctors, and that the neural network is so large that only a handful of organizations have enough computing power to train it
88% agree that a moratorium, pause, and/or legislation on AI needs to be implemented worldwide for it to be safe and effective
89% are concerned that we are not ready for this level of AI
84% agree with the open letter calling for a six-month pause on AI experiments, warning we could otherwise face profound risks to society and humanity
93% are concerned that ChatGPT could flood social media with phony articles that sound professional, or bury Congress with ‘grassroots’ letters that sound authentic
88% believe it is essential that lawmakers create AI governance systems

Here is more of what Sermo physicians have to say on this topic—their medical perspectives and opinions—in their own words: 

“Many people who have been experimenting with ChatGPT, for example, have found it has a tendency to ‘hallucinate’ or just flat make up answers when it is uncertain. The most success they experienced was when they had a pretty good idea of what they were asking before they asked, and used the AI for details, such as calculations. I don’t think it will be useful to have AI ‘hallucinate’ answers when uncertain in medical applications. Also – who is liable when AI gets something wrong?”

Neurology, U.S.

“AI may do better in an inpatient setting, but translating a patient’s vague description of symptoms into a working diagnosis and differentiating the nuances of visuals, such as rashes and lesions, is something that at this point only a trained human physician can do.”

Family Medicine, U.S. 

“If you thought that the internet changed our lives quickly, you ain’t seen nothin’ yet – the ‘AI’ tools are going faster and being applied in more places than we can easily imagine.”

Internal Medicine, U.S. 

“Great potential but a little scary.”

Rheumatology, U.S. 

“It’s a tricky subject. I’m from a generation that grew up reading the great sci-fi classics – Asimov, Philip K. Dick, Ian McEwan, Aldous Huxley – the generation that knew The Matrix and The Terminator, so it’s all suspicious to me, really. It can be very good, but if something looks very good, that doesn’t mean it is, does it? Maybe in environments where doctors are scarce it is good, I don’t know; I don’t think they can replace the human brain. In 2016, Satya Nadella, CEO of Microsoft, stated these principles in an interview: “AI must be designed to help humanity”, which means that human autonomy must be respected.
“AI must be transparent,” which means that humans must know and be able to understand how they work. 
“AI must maximize efficiency without destroying people’s dignity.” 
“AI must be designed for intelligent privacy,” which means trust is earned by protecting your information. 
“AI must have algorithmic accountability so that humans can undo unintended damage.” 
“AI must protect itself from prejudice” so as not to discriminate against people. In this technological world it would be good to know what doctors really think, before passing them over.”

Neurology, Cuba 

“AI is certainly a great advance that benefits medicine, but it cannot be absolute, because in irresponsible and unscrupulous hands it can be destructive.”

Stomatology, Cuba 

“I am excited about its potential to assist doctors in practice, recognizing changes in conditions more quickly and accurately, but I worry about it causing incomplete decision-making by the doctor. It’s similar to children not learning their multiplication and division tables because they rely on a computer. I am afraid that it will become a question of ‘disuse atrophy.’ We need both to maximize the return.”

Hematology Oncology, U.S. 

“Unless the AI can question the patient directly, the AI is at the mercy of the history obtained by medical students, interns, residents and attendings. As an allergist, I have seen hundreds of patients misdiagnosed by their primary care physician who failed to get an adequate history. Almost no primary care physician gets an adequate environmental history. So the AI makes a diagnosis and therapy based on inadequate data.”

Allergy & Immunology, U.S.

“Does make me worry about where the onus will lie if AI misdiagnosed or missed a diagnosis…”

Traumatology, U.K. 

“It is the understanding problem that I have difficulties with. The treatment and management of an individual patient may not be in line with the general trend for similar patients and this must be recognised. I am not sure that AI is up to this – yet!”

Urology, U.K. 

“As a complement: maybe OK, as a substitution: DEFINITE NO! Medicine is not only a science; it is really more an art form.”

G.P., Sweden 

“Personally, I can say that I have been using AI and getting good results so far, though not very specific in terms of management guidelines and evidence-based medicine. Still, I recommend it for super-quick reviews of day-to-day issues.”

G.P., Peru 

“This trend, which started years ago, has been accelerated by the expectation that by 2025 AI systems will be able to respond independently to specific questions from patients, especially after the health crisis. In this way, healthcare can evolve into completely personalized management.”

Stomatology, Cuba 

“This will give the huge hospital systems another advantage and put smaller rural hospitals out of business, which is another reason why health shouldn’t be a business endeavor.”

Rheumatology, U.S.

“Artificial intelligence has the potential to revolutionize medicine and improve the detection, diagnosis and treatment of breast cancer. Some of the potential consequences of using AI in medicine include: Early detection: AI can help detect breast cancer at its earliest stages, which would increase survival rates and improve treatment outcomes. Accurate diagnosis: AI can help doctors make more accurate and personalized diagnoses, allowing patients to receive the right treatment for their specific type of cancer. Improved efficiency: AI can help doctors make faster and more accurate decisions, which would reduce waiting times for patients and improve the efficiency of the health system. Error reduction: AI can help reduce human errors in image interpretation and decision making, which would improve the quality of healthcare and reduce risks for patients. Treatment personalization: AI can help doctors personalize treatment for each patient, based on their medical history and case-specific data. In summary, the use of artificial intelligence in medicine can have important positive consequences for the detection, diagnosis and treatment of breast cancer, which could significantly improve the quality of life of patients. However, it is important to ensure that it is used ethically and responsibly, and that the privacy and security of patient data are taken into account.”

Geriatric Medicine, Cuba

“This can be especially helpful in the early identification of chronic diseases, such as diabetes and cardiovascular disease. However, it is important to remember that AI cannot completely replace doctors. Medicine is a complex discipline that requires skills and knowledge that go beyond the capabilities of AI. Doctors have the ability to interact and communicate with patients, which can be crucial in identifying symptoms and understanding a patient’s medical history. In addition, doctors can apply their clinical judgment and experience in the diagnosis and treatment of diseases, which AI cannot replicate…”

Stomatology, Cuba

Every day, thousands of Sermo member physicians from diverse backgrounds and experiences exchange knowledge with each other. Sermo is the original medical social network that empowers today’s physicians. Over 1 million fully verified physicians across more than 150 countries come to our platform to talk with peers, participate in paid medical studies, solve challenging patient cases, contribute to the world’s largest database of drug ratings – and enjoy a few laughs along the way.

Interested in more? Check back any time and follow us on Facebook, Twitter, and LinkedIn for the latest and greatest in physician insights.  

Are you a physician or healthcare practitioner?  

Explore the many benefits of Sermo’s medical community and join in on all the exciting conversations when you sign up for free today.