In Harvard study, AI offered more accurate emergency room diagnoses than two human doctors

Posted on May 3, 2026 by safdargal12


A new study examines how large language models perform in a variety of medical contexts, including real emergency room cases — where at least one model seemed to be more accurate than human doctors.

The study was published this week in Science and comes from a research team led by physicians and computer scientists at Harvard Medical School and Beth Israel Deaconess Medical Center. The researchers said they conducted a variety of experiments to measure how OpenAI’s models compared to human physicians.

In one experiment, researchers focused on 76 patients who came into the Beth Israel emergency room, comparing the diagnoses offered by two internal medicine attending physicians to those generated by OpenAI’s o1 and 4o models. These diagnoses were assessed by two other attending physicians, who did not know which ones came from humans and which came from AI.

“At each diagnostic touchpoint, o1 either performed nominally better than or on par with the two attending physicians and 4o,” the study said, adding that the differences “were especially pronounced at the first diagnostic touchpoint (initial ER triage), where there is the least information available about the patient and the most urgency to make the correct decision.”

In Harvard Medical School’s press release about the study, the researchers emphasized that they did not “pre-process the data at all” — the AI models were presented with the same information that was available in the electronic medical records at the time of each diagnosis. 

With that information, the o1 model managed to offer “the exact or very close diagnosis” in 67% of triage cases, compared to one physician who had the exact or close diagnosis 55% of the time, and to the other who hit the mark 50% of the time.

“We tested the AI model against virtually every benchmark, and it eclipsed both prior models and our physician baselines,” said Arjun Manrai, who heads an AI lab at Harvard Medical School and is one of the study’s lead authors, in the press release.


To be clear, the study didn’t claim that AI is ready to make real life-or-death decisions in the emergency room. Instead, it said the findings show an “urgent need for prospective trials to evaluate these technologies in real-world patient care settings.”

The researchers also noted that they only studied how models performed when provided with text-based information, and that “existing studies suggest that current foundation models are more limited in reasoning over nontext inputs.”

Adam Rodman, a Beth Israel doctor who’s also one of the study’s lead authors, warned the Guardian that there’s “no formal framework right now for accountability” around AI diagnoses, and that patients still “want humans to guide them through life or death decisions [and] to guide them through challenging treatment decisions.”

In a post about the study, Kristen Panthagani, an emergency physician, said this is “an interesting AI study that has led to some very overhyped headlines,” especially since it was comparing AI diagnoses to those from internal medicine physicians, not ER physicians.

“If we’re going to compare AI tools to physicians’ clinical ability, we should start by comparing to physicians who actually practice that specialty,” Panthagani said. “I would not be surprised if an LLM could beat a dermatologist at a neurosurgery board exam, [but] that’s not a particularly helpful thing to know.”

She also argued, “As an ER doctor seeing a patient for the first time, my primary goal is not to guess your ultimate diagnosis. My primary goal is to determine if you have a condition that could kill you.”

This post and headline have been updated to reflect the fact that the diagnoses in the study came from internal medicine attending physicians, and to include commentary from Kristen Panthagani.

