In terms of technology, nothing has dominated my headspace more than AI. But it’s mostly been out of frustration and an inability to articulate my thoughts beyond repeating the same ideas over and over (kind of like an untrained generative AI). Maggie Appleton, however, knows more than me and wrote a brilliant essay called The Expanding Dark Forest and Generative AI which delves into the harmful aspects of AI hype and how difficult it might be to “prove you’re a human on a web flooded with generative AI content”:
Over the last six months, we’ve seen a flood of LLM copywriting and content-generation products come out: Jasper, Moonbeam, Copy.ai, and Anyword are just a few. They’re designed to pump out advertising copy, blog posts, emails, social media updates, and marketing pages. And they’re really good at it. Primarily because GPT-3, which powers many of these products, was specifically trained on text from the web. It’s intimately familiar with the style of language we use online.
These models became competent copywriters much faster than people expected – too fast for us to fully process the implications. Many people had their come-to-Jesus moment a few weeks ago when OpenAI released ChatGPT, a slightly more capable version of GPT-3 with an accessible chat-bot style interface. They’re calling it GPT-3.5. It’s the same model with human reinforcement learning layered on top. The collective shock and awe reaction made clear how few people had been tracking the progress of these models.
This was actually written at the end of December, so readers today will have seen significantly more progress and buzz around ChatGPT (people are even asking about GPT-4!). While the essay didn’t make me feel more relaxed, it did validate some of my thoughts on how I want to use AI, and how I think other people should: in a way that enables efficient workflows and production without negating the humanity in what we create. Because honestly… why is everyone so afraid to write their own articles or hire someone to draw images?