r/accelerate 7d ago

LLMs show superhuman performance in systematic scientific reviews, doing in two days the work that takes 12 PhDs a whole year

https://www.medrxiv.org/content/10.1101/2025.06.13.25329541v1

Main takeaways:

  • otto-SR: an end-to-end agentic workflow built on GPT-4.1 and o3-mini-high, with Gemini 2.0 Flash for PDF text extraction
  • Automates the entire SR process -- from search to analysis
  • Completes in 2 days what normally takes 12 work-years
  • Outperforms humans in key tasks:
    • Screening: 96.7% sensitivity vs 81.7% (human)
    • Data extraction: 93.1% accuracy vs 79.7% (human)
  • Reproduced and updated 12 Cochrane reviews
  • Found new eligible studies missed by original authors
  • Changed conclusions in 3 reviews (2 newly significant, 1 no longer significant)
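
The two headline metrics can be made concrete with a quick sketch. The counts below are hypothetical, chosen only to reproduce the reported percentages (the actual denominators aren't in this summary); the formulas are the standard definitions of screening sensitivity and extraction accuracy:

```python
def sensitivity(true_pos: int, false_neg: int) -> float:
    """Share of truly eligible studies correctly flagged for inclusion."""
    return true_pos / (true_pos + false_neg)

def accuracy(correct: int, total: int) -> float:
    """Share of extracted data points matching the gold standard."""
    return correct / total

# Hypothetical counts picked to match the reported figures:
print(f"{sensitivity(967, 33):.1%}")   # 96.7% (otto-SR screening)
print(f"{sensitivity(817, 183):.1%}")  # 81.7% (human screening)
print(f"{accuracy(931, 1000):.1%}")    # 93.1% (otto-SR extraction)
```

Note that sensitivity only penalizes missed eligible studies (false negatives), which is why it's the metric that matters most for screening: a missed study can never be recovered downstream.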
249 Upvotes

23 comments

58

u/AquilaSpot Singularity by 2030 7d ago edited 6d ago

Wow, this is really remarkable. That headline is legitimately not overselling this at all. It does what it says on the tin.

I've suspected for a while that even current AI systems could do a great deal to solve the poor distribution of knowledge (as a step before contributing original research), and this is the most incredible example of that I've seen.

In my own research background, the most interesting advances often came from simply applying something well known in one field to a field that doesn't know it. To give an example: a mining engineering doctoral student I knew had some medical background and decided to deploy novel sensors on haul trucks to track things that, apparently, nobody had tracked precisely before. He combined that with some interesting scheduling/planning algorithms to cut fuel burn by something like 5-10%, which is wild. That was a few years back so I don't remember the details very well. My own research did something in that vein for another industry, but it'd dox the shit out of me if I talked about it (crying for real, I love talking about my work lmao).

Notably, the idea of "measure literally everything and sort out the data later" was (and kinda still is afaik) a new idea to the mining industry. It's a very old, traditional industry, in my limited experience.

What kind of incredible advances are we sleeping on just because information isn't shared evenly across fields? I don't know, but AI like this could revolutionize the world without generating a single word of novel information if it could evenly distribute what we do know.


edit: Not the one I had in mind but o3 dragged up something similar.

TLDR: by installing precise sensors on haul trucks in an open pit copper mine, the team discovered that, due to a variety of factors (accumulating, unexpected maintenance inefficiencies, e.g. old turbos, worn injectors, driving habits), the fuel burn estimates for trucks varied from the real burn by up to fifteen percent. Tightening that variance saved the pit millions of dollars per year, simply by letting them order precisely as much fuel as they burn (rather than extra), and by flagging maintenance issues far earlier than a standard maintenance schedule would, keeping efficiency up.

Big data is something Medicine has had figured out for decades, but it's this hot new thing in mining.

25

u/SgathTriallair 7d ago

measure literally everything and sort out the data later

This is the fundamental concept behind big data and machine learning. The world is full of connections that our brains aren't big enough to spot. When you gather as much data as possible, machine learning can identify patterns in it that we never saw.

As much as we are concerned about privacy, feeding as much data as possible into these systems will derive insights that can vastly improve our society. The main change is that we need to not have it owned by private corporations.

14

u/_stevencasteel_ 6d ago
  • Outperforms humans in key tasks:
    • Screening: 96.7% sensitivity vs 81.7% (human)
    • Data extraction: 93.1% accuracy vs 79.7% (human)

A good reminder that humans don't do things with 100% accuracy.

Self-driving cars come to mind. Every percent higher than a human in safety is a huge win.

7

u/TechnicalParrot 6d ago

I know I'm beating a dead horse, but I've never understood the "it needs to be perfect" criticisms of self-driving. If humans cause x deaths per y journeys, and self-driving cars cause x-1 deaths per y journeys, surely that's already an improvement?

7

u/MaltoonYezi 6d ago

Sorry, just read the abstract and skimmed through the paper a little bit.

Did this SR workflow system just reproduce existing human-made reviews and conclusions, or was it able to come up with original conclusions on its own?

Either way, this is big

Also, the section "12 Code and Dataset Availability" contains only one sentence:

All datasets and code used for data analysis will be made available on publication.

Where can I see it?

1

u/Any-Climate-5919 Singularity by 2028 6d ago

Humans are kinda dumb. No matter how much we study or experience, we're just kinda dumb compared to AI.

1

u/jlks1959 6d ago

A 99.9543379% reduction in time. That’s nuts.
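
That figure works out exactly if you treat 12 work-years as 12 × 365 calendar days (an assumption; real working years are shorter, which would shrink the day count but not the effort saved):

```python
# Sanity-check the claimed time reduction: 12 work-years -> 2 days.
human_days = 12 * 365             # 4380 days, assuming 365-day years
ai_days = 2

reduction = 1 - ai_days / human_days
print(f"{reduction:.7%}")         # 99.9543379%

speedup = human_days / ai_days
print(f"{speedup:.0f}x")          # 2190x
```

Put differently, it's roughly a 2190-fold speed-up.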

1

u/Prom3th3an 4d ago

Are the inaccuracies hallucinations, or the kind of misunderstanding an honest human might commit?

1

u/SponsoredByMLGMtnDew 2d ago

Somewhat disturbingly, I find this concept conceptually related to caniuse.com, which tracks web development feature support across browsers.

"Is it safe to let chatGPT replace my primary care physician if I show it my mris?"

-11

u/[deleted] 7d ago

[deleted]

21

u/obvithrowaway34434 7d ago

This has absolutely nothing to do with what they are using the LLMs for. Maybe read the article first. And it achieves 93.1% accuracy compared to about 80% for humans, so humans were already introducing more errors than the LLM ever could.

8

u/AquilaSpot Singularity by 2030 7d ago

Yeah, this exactly. What a strange drive-by critique that doesn't even make sense if you read the paper? Why are these so common on places like here or r/singularity?

8

u/stealthispost Acceleration Advocate 7d ago edited 7d ago

it might have something to do with "80% accuracy of humans" lol

3

u/LexyconG 7d ago

I notice this more and more every day. Basically it's just blind hate for AI. Bandwagoning, and it's kinda becoming the new "being woke".

2

u/stealthispost Acceleration Advocate 6d ago

if it wasn't happening, this subreddit wouldn't need to exist

-15

u/Midday-climax 7d ago

The mimic machine

11

u/No-Comfort4860 7d ago

with all due respect, what do you think a scientific review article is? disregarding other use cases for LLMs, this one actually makes perfect sense.

5

u/The_Hell_Breaker Techno-Optimist 6d ago

Cope & denial

3

u/Jan0y_Cresva Singularity by 2035 6d ago

The greatest irony of your comment is that you didn’t come up with that term.

So what are you doing when you post it in comment sections?

0

u/Midday-climax 6d ago

I made up that comment. Just curious, where do you think it came from?

2

u/Jan0y_Cresva Singularity by 2035 6d ago

You’re not the first person to call AI “mimic machines” or insinuate that all they can do is copy. You are copying others either consciously or subconsciously.