Until recently, I was using IBM Watson as the backbone of my natural language processing (NLP) tasks due to its ease of use, its brilliant capabilities, and the fact that it was free for what I wanted from it. I’ve since scaled back and switched to a different API, but Watson still remains a powerful tool for NLP. Back in the early 2010s, however, it was meant to be so much more, as the NYT explained:
A decade ago, IBM’s public confidence was unmistakable. Its Watson supercomputer had just trounced Ken Jennings, the best human “Jeopardy!” player ever, showcasing the power of artificial intelligence. This was only the beginning of a technological revolution about to sweep through society, the company pledged.
“Already,” IBM declared in an advertisement the day after the Watson victory, “we are exploring ways to apply Watson skills to the rich, varied language of health care, finance, law and academia.”
But inside the company, the star scientist behind Watson had a warning: Beware what you promise.
David Ferrucci, the scientist, explained that Watson was engineered to identify word patterns and predict correct answers for the trivia game. It was not an all-purpose answer box ready to take on the commercial world, he said. It might well fail a second-grade reading comprehension test.
His explanation got a polite hearing from business colleagues, but little more.
“It wasn’t the marketing message,” recalled Mr. Ferrucci, who left IBM the following year.
It was, however, a prescient message.
Between the media and the wider tech industry, what’s changed in how AI technologies are treated: overinflated and primed to crash and burn when they don’t deliver? Not a damn thing.
Filed under: IBM, natural language processing