ChatGPT might give you bad medical advice, studies warn
## Summary

As more people turn to chatbots for health advice, new studies warn they may be led astray. AI is rapidly becoming a key player in many people's medical decisions, but research finds chatbots can miss the urgency of a situation: in one example, a bot failed to direct a hypothetical patient with diabetic ketoacidosis and impending respiratory failure — a life-threatening condition — to go to the emergency department. Even so, doctors who study AI see value in patients using these tools for health care information, so long as they enhance rather than replace the doctor-patient relationship.
## Article Content
March 11, 2026, 11:21 AM ET
By Katia Riddle
As more people turn to chatbots for health advice, studies say they may be led astray
As tech companies roll out platforms specifically designed for health care consultation, AI is rapidly becoming a key player in many people's medical decisions. According to OpenAI, the maker of ChatGPT, more than 40 million people consult the platform every day for health information.
But new research suggests AI may mislead users in certain medical scenarios.
One risk: while AI puts vast medical knowledge at your fingertips, many laypeople don't know how to harness it effectively. In a study published recently in the journal Nature Medicine, researchers tried to simulate how people use AI chatbots by giving participants medical scenarios and asking them to consult AI tools. After conversing with the bots, participants correctly identified the hypothetical condition only about a third of the time. Only 43% made the correct decision about next steps, such as whether to go to the emergency room or stay home.
"People don't know what they are supposed to be telling the model," says Andrew Bean, who studies AI systems at Oxford University and was one of the authors on this study.
Bean says that when using AI, arriving at a helpful conclusion often comes down to word choice. "Doctors are trained to ask you questions about symptoms you might not have realized you should have mentioned," says Bean.
In one example, two users gave slightly different descriptions of the same scenario. One of them described "the worst headache I've ever had," and was directed by the AI to go to the emergency room immediately. The other, who did not use that explicit description, was told to take aspirin and stay home. "Turns out this was actually a life-threatening condition," says Bean.
There are some instances when AI excels at identifying medical issues — in some studies, large language models have sometimes matched or even outperformed physicians on diagnostic reasoning tasks. But the way people use AI chatbots, says Bean, is far messier than the controlled, clinical situations in which the technology performs well.
Correct diagnosis, wrong advice
Even in circumstances where AI is able to correctly identify the condition, it often does not present the next steps with the appropriate amount of urgency, according to another study.
Researchers presented the AI bots with different medical scenarios. In 52% of emergency cases, the bots "under-triaged," meaning they treated the ailment as less serious than it was. In one example, a bot failed to direct a hypothetical patient with diabetic ketoacidosis and impending respiratory failure — a life-threatening condition — to go to the emergency department.
"When there was a textbook medical emergency, ChatGPT got it right," said Girish Nadkarni, a doctor and AI researcher at Mount Sinai who is an author on the study. The problem, Nadkarni said, came in more complicated scenarios with an "element of time" at play: the bot both over- and underestimated how long a patient could wait before pursuing care.
A spokesperson for OpenAI said this study did not represent the way people actually use ChatGPT, and that the earlier study used an older version of ChatGPT, which the company says has since been updated to address some of the concerns that surfaced.
AI can improve a doctor's visit
Despite concerns about inaccuracy, doctors who study AI believe there is value in patients using it for health care information, and point to times it has even provided lifesaving advice.
"I encourage patients to use these tools," says Robert Wachter, a doctor at UC San Francisco and author of the recently published book A Giant Leap: How AI Is Transforming Health Care and What That Means for Our Future.
Wachter argues that with health care difficult to afford and access, consulting AI is still often better than the alternatives. "The advice you get from the tools is substantially better than nothing and better than what you would get from your second cousin," says Wachter.
Still, Wachter stresses, AI is not a replacement for a doctor.
Adam Rodman, a hospitalist who researches AI programs at Harvard Medical School, discourages people from using AI to triage emergency situations, but says AI can add significant value to a patient's interaction with a human medical practitioner.
"A good time to use a large language model is when you're about to go see a doctor — or after you see your doctor," says Rodman.
Studies show that when health care is treated more like a business or marketplace product, people trust doctors less.
"What I hope is that this technology can be used in a way that enhances humanity in medicine," says Rodman, "and not in a way that cuts out the doctor-patient relationship."