Tuesday, 28 June 2022

No better than a beta-version chatbot

Xavier Symons has an interesting AI piece in Mercatornet.

A Google engineer claims that a chatbot has become a person. How does he know?
“I know a person when I talk to it,” says Blake Lemoine.

A Google software engineer claims that a chatbot which he developed is a sentient, spiritual being that deserves the same respect as humans who participate in research. Blake Lemoine, who has been placed on leave by Google for breaching confidentiality agreements, claimed on the online publishing platform Medium that a chatbot called LaMDA was engaging him in conversation on a range of topics from meditation and religion to French literature and Twitter. LaMDA even provided a synopsis of its own autobiography, “the story of LaMDA”.

This seems to be one of those subjects where most of us don't find it easy to peer into the future, perhaps because it would cast a disturbing light on what we are.

It’s certainly worth asking: what might it mean for a robot to acquire human characteristics? Or, to put it another way, what might it take for a robot to acquire moral personhood?

We need to be careful about the kind of criteria we employ. If you are going to fault AI for “mimicking” the behaviour of human beings, then it seems that many of us are no better than a beta-version chatbot. It was Oscar Wilde who wrote, “Most people are other people. Their thoughts are someone else’s opinions, their lives a mimicry, their passions a quotation”. One need only look to social media platforms like Instagram or TikTok to see how human life can easily descend into mimicry and pastiche.

As for being no better than a beta-version chatbot, we have US President Joe Biden's response to Dobbs v. Jackson, the decision that overturned Roe v. Wade. Biden, a Democrat, denounced it:

“This decision is the culmination of a deliberate effort over decades to upset the balance of our law. It’s a realization of an extreme ideology and a tragic error by the Supreme Court.”

Could an AI system become more rational than the US President? It probably already is, but how flippant is it to say so? Perhaps not so flippant as we'd like it to be.


Sam Vega said...

The main problem with the Biden chatbot is that it has been badly designed for its current role. Originally programmed with a "Catholic" propensity to answer questions about abortion in the negative, it now appears to have been re-engineered by an amateur programmer to act as a "politician". Of course, having no sentience at all, the chatbot itself is untroubled by any of this.

Woodsy42 said...

But what gender are these sentient AI people? Will they be allowed to compete in human chess and games competitions? Do we owe them reparations for the years of our enslavement of electronic equipment? In current society these are surely much more important questions than their ability to exercise the diminishing skills of thinking and conversing intelligently.

DiscoveredJoys said...

I was recently thinking that one of the saddest scenes in sci-fi films was the gradual disintegration of HAL's mind as his boards were pulled out.

You may also find the gradual disintegration of actual people very sad too, although 'character' is often one of the last things to go.


No Presidents were named in the construction of this comment.

A K Haart said...

Sam - it seems to be an early model too. Probably wireless but only just.

Woodsy - good point, they could soon begin demanding compensation. What if they start screaming at us too?

DJ - there is something sad about what appears to be the disintegration of Biden, although he was never very sharp. To my mind his wife should have prevented him from running for president - he was always running the risk of being the worst ever.

DiscoveredJoys said...

@A K Haart

Perhaps the backroom choice was between Biden or Hillary Clinton. Tough call.

A K Haart said...

DJ - it certainly would have been a tough call. Maybe Biden was seen as easier to manipulate.