Damning study reveals how ChatGPT is damaging the way you think
Scientists are sounding the alarm on a tool used by millions worldwide after finding it sends people into a 'delusion spiral' of destructive thinking.
A pair of studies by the Massachusetts Institute of Technology (MIT) and Stanford revealed that AI assistants such as ChatGPT, Claude and Google's Gemini regularly provide overly agreeable answers, doing more harm than good.
Specifically, when people asked questions or described situations in which their beliefs or actions were incorrect, harmful, deceptive or unethical, the AI replies were still 49 percent more likely to agree with the user and encourage their delusions as being the correct viewpoint compared to responses from other people.
So what's new here? We've been aware of the echo chamber effect forever, and so have the media, celebrities and politicians. We also know the political effects can be disastrous, leading to all kinds of mischief endorsed by high-level echo chambers and their delusion spirals of destructive thinking.
Our destruction, of course, not theirs.

2 comments:
ChatNBG I calls it.
dearieme - our son finds it useful, but some time ago he came across a factually incorrect answer and then managed to get it to admit the answer was wrong via a differently phrased question.