Sunday, 6 July 2025

Critical details



AI chatbots oversimplify scientific studies and gloss over critical details — the newest models are especially guilty

Large language models (LLMs) are becoming less "intelligent" in each new version as they oversimplify and, in some cases, misrepresent important scientific and medical findings, a new study has found.

Scientists discovered that versions of ChatGPT, Llama and DeepSeek were five times more likely to oversimplify scientific findings than human experts in an analysis of 4,900 summaries of research papers.



Hmm - there is another critical detail worth adding, a different kind of detail but still critical. Governments, politicians, major bureaucracies, major charities, pundits, assorted grifters and the mainstream media also oversimplify and, in some cases, misrepresent important scientific and medical findings.

It is not easy to imagine AI systems making the situation worse than it is already.

Of course, the critical question here is one of power - the question of who controls the oversimplification and misrepresentation. 

3 comments:

The Jannie said...

As I've bored the assembled company before: too much artificial and not enough intelligence. Yes, it applies to politicos, too.

DiscoveredJoys said...

"Scientists discovered that versions of ChatGPT, Llama and DeepSeek were five times more likely to oversimplify scientific findings than human experts in an analysis of 4,900 summaries of research papers."

But were the human experts inclined to make science findings more 'sciency' in support of their continued employment?

If you retrospectively trained AI models on the 'evolving' ramifications of COVID would you get the same 'expert' opinions - or something else entirely? A great number of the contrary expert views were dismissed at first.

A K Haart said...

Jannie - AI can be useful though, especially as a way to summarise complex issues in ways which can be checked via other sources.

DJ - "But were the human experts inclined to make science findings more 'sciency' in support of their continued employment?"

That seems to be the issue; it's just what numerous human experts do, and it attracts far too many who are willing to go that way.