Austin Z. Henley wrote an interesting blog post on natural language and how the influx of models has potential beyond their chat interfaces:
ChatGPT has kicked off a frenzy. It is all anyone in the tech world is talking about it seems. Startups are popping up left and right. Big companies are rapidly releasing ChatGPT-like features integrated in their products.
People are anticipating that large language models are going to revolutionize the world.
And maybe they will.
But a chat bot won’t.
Expecting users to primarily interact with software in natural language is lazy.
It puts all the burden on the user to articulate good questions: what to ask, when to ask it, how to ask it, then to make sense of the response, and then to repeat that many times.
But a user may not know what they don’t know.
I do think the current presentation of these models—a text box and sliders—isn’t ideal for people who don’t know what they want from a large language model. And given that the companies creating these models use inputs as part of their training data, we could be getting a lot of people just wanting to have fun with it. Does that make for a quality dataset for future models?
On the flip side, how much can you do behind the scenes to make it more useful? And in doing so, are you pushing usage in a direction that alienates users and benefits only the companies? I don’t know the answer, but what we have now isn’t as great as people make it out to be.