In her book Atlas of AI, Kate Crawford takes a broader look at AI and tries to demystify it as a concept. She discussed some of those points with Tom Simonite for WIRED:
WIRED: Few people understand all the technical details of artificial intelligence. You argue that some experts working on the technology misunderstand AI more deeply.
KATE CRAWFORD: It is presented as this ethereal and objective way of making decisions, something that we can plug into everything from teaching kids to deciding who gets bail. But the name is deceptive: AI is neither artificial nor intelligent.
AI is made from vast amounts of natural resources, fuel, and human labor. And it’s not intelligent in any kind of human intelligence way. It’s not able to discern things without extensive human training, and it has a completely different statistical logic for how meaning is made. Since the very beginning of AI back in 1956, we’ve made this terrible error, a sort of original sin of the field, to believe that minds are like computers and vice versa. We assume these things are an analog to human intelligence, and nothing could be further from the truth.
Crawford is a professor at the University of Southern California and a researcher at Microsoft, so she knows her shit. Her calls for more regulation of many AI applications echo the sentiments of her peers, such as Margaret Mitchell and Timnit Gebru. For me, AI has amazing capabilities for a wide range of problems, but it cannot and should not be used everywhere, particularly in areas with high levels of bias, because AI models need data to be trained. If an industry is highly biased, its data will be too, and we can't assume the same humans in those sectors will be able to weed that bias out when many of them introduced it in the first place. As Crawford, Gebru, Mitchell, et al. have said countless times, we need better ethics before we roll AI applications out to every corner of the human experience.