Sunday, 5 April 2026

Damaging the way you think



Damning study reveals how ChatGPT is damaging the way you think

Scientists are sounding the alarm on a tool used by millions worldwide after finding it sends people into a 'delusion spiral' of destructive thinking.

A pair of studies by the Massachusetts Institute of Technology (MIT) and Stanford revealed that AI assistants such as ChatGPT, Claude and Google's Gemini regularly provide overly agreeable answers, doing more harm than good.

Specifically, when people asked questions or described situations in which their beliefs or actions were incorrect, harmful, deceptive or unethical, the AI replies were still 49 percent more likely than human respondents to agree with the user and affirm their delusions as the correct viewpoint.


So what's new here? We've been aware of the echo chamber effect forever, and so have the media, celebrities and politicians. The political effects can be disastrous, we know that too, leading to all kinds of mischief endorsed by high-level echo chambers and their delusion spirals of destructive thinking.

Our destruction of course, not theirs.


4 comments:

dearieme said...

ChatNBG I calls it.

A K Haart said...

dearieme - our son finds it useful, but some time ago he came across a factually incorrect answer, then managed to get it to admit the answer was incorrect via a differently phrased question.

dearieme said...

The first question I ever asked of Google's AI it answered with a lie. I too managed to get a different answer with a slightly different question. But why should I believe the second answer either?

A K Haart said...

dearieme - you could have fun with it and ask which answer is correct.

Some time ago, two or three years I think, I came across an article by a person who had set himself up as a consultant on the best way to phrase AI questions to avoid wrong answers. At the time it was seen as an issue, but since then I haven't heard much about it.