Mathew Otieno has an interesting Mercatornet piece on cleaning up ChatGPT using outsourced workers in Kenya.
Is it wrong to pay Kenyans US$2 an hour to take out ChatGPT’s garbage?
It’s a job; it pays the bills. What’s the big deal, asks our Kenyan correspondent.
Ever since OpenAI’s ChatGPT chatbot burst into the limelight late last year, its popularity has grown by leaps and bounds. By the end of January 2023, according to a report from UBS, a bank, ChatGPT had garnered over 100 million monthly active users, beating all social media sites as the fastest consumer internet service to achieve that distinction.
Unsurprisingly, in lockstep with its growing popularity, controversies have also started dogging the company. For instance, in mid-January, Time magazine published a bombshell report about how OpenAI sub-contracted Kenyan workers earning less than US$2 per hour to label toxic content, like violence, sexual abuse and hate speech, to be used to train ChatGPT to reduce its own toxicity. Some of them reported that they had been mentally scarred by descriptions of topics ranging from hate speech to violence to sexual perversion.
An interesting piece from a number of angles. One is the low-cost outsourcing angle attacked by the usual suspects; another is the free speech angle. Some cleaning up is bound to be necessary, but how many of us would care to do it as a full-time job? I wouldn't.
Which countries are moderating ChatGPT’s toxic content now? It’s not public information. We asked the bot itself and were told:
OpenAI has not publicly disclosed the locations of the individuals or companies providing human annotations for its training data. It is possible that some of the workers involved in annotating and improving the training data for models like GPT-3 are based in Kenya, but this information has not been confirmed by OpenAI.
Unfortunately, even the term 'toxic content' is thoroughly political now. By that standard, true content can be toxic and false content non-toxic. ChatGPT has a long way to go, but it will be offered shortcuts by the usual suspects.
4 comments:
Paying Third-worlders to eliminate nasty stuff like hate speech and pornography is pretty easy. It can be done by anyone who can pass a simple test to recognise it, and who can work the software.
The real test is determining what counts as politically incorrect or inconvenient content. That's going to take a liberal-arts-educated American, and they'll have to pay more for that.
What’s ChatGPT when it’s at home?
If "social media" are any guide it's more likely to be Taking Out the Non-Garbage.
Sam - or maybe certified Guardian readers could do it more cheaply.
James - an AI system people have been playing with online.
dearieme - what we probably need is another version with all the material which failed their "fact checking".