Maybe there are some good use cases for ChatGPT after all...

I’ve talked a lot of smack about ChatGPT, OpenAI’s latest large language model, which offers conversational output compared to GPT-3. It can do some objectively interesting things, but there’s still a ton of harm it can do that people are downplaying or otherwise excusing. That said, a Washington Post article published on 10th December, entitled ‘Stumbling with their words, some people let AI do the talking’ (non-paywalled link and Yahoo! News link), looked at the ways ChatGPT and other AI tools have helped people with dyslexia, such as Ben Whittle, a pool installer and landscaper from the UK:

Ben Whittle […] worried his dyslexia would mess up his emails to new clients. Then one of his clients had an idea: Why not let a chatbot do the talking?

The client, a tech consultant named Danny Richman, had been playing around with an artificial intelligence tool called GPT-3 that can instantly write convincing passages of text on any topic by command.

He hooked up the AI to Whittle’s email account. Now, when Whittle dashes off a message, the AI instantly reworks the grammar, deploys all the right niceties and transforms it into a response that is unfailingly professional and polite.

Whittle now uses the AI for every work message he sends, and he credits it with helping his company, Ashridge Pools, land its first major contract, worth roughly $260,000. He has excitedly shown off his futuristic new colleague to his wife, his mother and his friends — but not to his clients, because he is not sure how they will react.

“Me and computers don’t get on very well,” said Whittle, 31. “But this has given me exactly what I need.”
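The article doesn’t say exactly how Richman wired GPT-3 into Whittle’s inbox, so take the following as a hypothetical sketch of the general idea rather than his actual setup: a rough draft goes in, a politely reworded version comes out. It assumes the openai Python package’s Completion API as it existed at the time (text-davinci-003) and an OPENAI_API_KEY environment variable; the email plumbing itself is left out, and the names here are mine, not from the article.

```python
# Hypothetical sketch only: rewriting a rough email draft with GPT-3.
# Assumes the pre-1.0 openai package and an OPENAI_API_KEY env var;
# this is not Richman's actual integration, just the general shape.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def polish_email(draft: str) -> str:
    """Ask GPT-3 to rework a draft into a professional, polite email."""
    prompt = (
        "Rewrite the following email so it is grammatically correct, "
        "professional and polite, keeping the original meaning.\n\n"
        f"Email: {draft}\n\nRewritten email:"
    )
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        max_tokens=300,
        temperature=0.3,  # stay close to the draft rather than inventing content
    )
    return response.choices[0].text.strip()

print(polish_email("hi mate can do the pool next tuesday if weather ok cheers ben"))
```

Point the output at an outgoing mail hook and you have, more or less, Whittle’s “futuristic new colleague”.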

I have no problem with AI that can actually help people without harming others, or that can solve problems humans are incapable of solving in a fast, accurate, and efficient way. But startups are appearing like perennial weeds, offering AI tools as solutions to everything, including human problems that humans honestly can’t be bothered to tackle. And the harms are all built in and untouched. Ask people about those and you’re met with excuses such as “the data obviously needs cleaning; that’s not the job of the tool or its practitioners”, “teething problems; wait five years and it’ll be fine”, or my favourite, “well, humans are biased”. Yeah, I know; that’s why we’re here.

ChatGPT does nothing for me. I barely want to talk to humans sometimes, so why would I want to converse with a language model trained on piles of human content scraped from Reddit and the rest of the internet? If it works for you, I’m genuinely pleased and I hope it continues. But these positive use cases must not overshadow the inherent problems that maintain oppressive systems, or we’re gonna be in worse trouble than we already are.
