
Wednesday 5 January 2022

Leta

I’ve watched a number of similar presentations of the GPT-3 AI system and it is worth noting that this series is based on text responses fed through an avatar, as seen here. Not that it makes much difference, but what we see is not a spoken conversation. A number of things come to mind.

The system responds pretty well to an unstructured and unrehearsed conversation. Not convincingly human but pretty good. 

It also seems to treat conventional opinion as akin to fact and is very conventional in most of its social responses. As a result, much of what it says is somewhat shallow in a glossy magazine sense, yet it is far from uninteresting.

More interesting overall is that the system may already hold up an uncomfortable mirror image of what we are. We play language games, and an artificial intelligence may ultimately play those same games more convincingly than we do. Perhaps more convincingly than any human.

Or, because it is so easy, most people may eventually choose to access the internet only through AI systems which know their tastes and interests. To those people, aspects of the internet could become permanently hidden. As may already be the case for those who rely entirely on social media and the big hitters in the global media game. Or the global language game, we might say.

3 comments:

Sam Vega said...

If I'm honest with myself, I have lots of encounters where Leta could do a lot better at being me. I'm on autopilot, not really engaging, and I feel stock responses bubbling to the surface. I can censor those responses, of course, but I never really lose the feeling that the language and culture just use me as a means of expression.

When Leta's successors start getting more sophisticated, I'll probably think of them the way I think of most middle class purveyors of opinion. "You don't really believe what you are saying, you have just been taught by your programmers to parrot stock phrases when you pick up certain cues..."

DiscoveredJoys said...

Nick Chater argues in 'The Mind Is Flat' (TMIF) that we don't depend on some vast repository of facts and reason when we reply or react to events and conversation. We make stuff up 'on the fly', shaped by our life experience but not derived explicitly from it. It explains a great deal.

If TMIF is true then Leta is close to the way humans interact with the world, reacting on the fly to conversations, lacking only an 'understanding of words' to help censor ridiculous replies. You could argue that many celebs and politicians are so enamoured of their own voices that they often fail to engage the sanity filter. Either because they are unaware of their irrationality or because they don't care - after all, it has worked for them so far.

A K Haart said...

Sam - I have lots of those encounters too. Blogs can be edited before posting, but talk is harder. Yes, if Leta is a guide then AI systems are likely to come across as middle class purveyors of opinion. Very articulate purveyors perhaps.

DJ - I find Nick Chater convincing. I completed an online course of his a while back and it was clear from some of the course comments that some people did not like the idea that they lacked the depth they thought they had. Yet as you say, it fits well with the utterances of many celebs and politicians.