WIRED on facial recognition in crime prevention (or should that be crime creation?)

Michael Calore and Lauren Goode from WIRED discussed the use of facial recognition to identify criminal suspects, and the ways it can fail, in a recent Gadget Lab podcast episode. You can read the full transcript on the WIRED website, but I thought I’d pull this long excerpt to illustrate my issue with facial recognition and the implementation of machine learning in general:

LG: […] The problem is that this technology doesn’t work as intended, right? And that has terrible real-life consequences for people. How does that really work? If this were to be regulated in some way, is it possible that it would not only be regulated at the level of how it’s deployed, but also that there would be regulations around how it’s built? Because we know with AI and machine learning there’s that phrase: garbage in, garbage out. You give it bad data, and the output could be wrong too, right? Talk about how that actually works technically, explain it for our audience, and how that should maybe change so that these things don’t happen.

KJ: Well, I should say that we don’t really know how accurate it is in practice. I think there’s a comparison to be made between the real-world performance of this technology and what we know from laboratory settings. We know that in laboratory settings, its ability to identify people has improved greatly within the last couple of years. It’s even, apparently, for some reason, better at identifying people from the side, which I didn’t know we were doing.

LG: Oh, that’s interesting.

KJ: Yeah. That’s a thing. I learned that. But the difference between the real-world use case and deployment in a laboratory, I think, is important to note. The other really important thing is that even with the best facial recognition algorithm, a low-quality image can greatly reduce the accuracy of the results. So even the best algorithm is going to be diminished by a poor photo used as the input, no matter how the artificial intelligence was trained or what data was used for it. That’s something that I think a lot of makers of this technology are still struggling with and haven’t been able to address. But as far as garbage in, garbage out goes: every artificial intelligence model uses training data to make a “smart decision” or determination. If the training data is representative, it can accurately identify people, for example.

But if the training data doesn’t have people from different walks of life, then you are going to have biased results. This was demonstrated most notably by the Gender Shades project in 2018. It’s extremely important to note that we probably wouldn’t be having this conversation if it weren’t for that particular work by two Black women, Timnit Gebru and Joy Buolamwini. They were able to identify a clear overrepresentation of white men in the training data, and I believe the National Institute of Standards and Technology, which is part of the Department of Commerce and has also studied this, has found similar results.

LG: So basically, if these systems are trained predominantly on images of white men, then that is going to create a higher accuracy rate for images of white men versus people of color, women, children, and other groups underrepresented in the training data sets.

KJ: That’s the idea. But, as I was saying, the quality of a photo can impact it. The other thing is that people’s faces change as they age, so when the photo was taken can also affect the results. There are different factors that can contribute as well.
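To make those two failure modes concrete, here’s a toy sketch I put together: synthetic 2-D points stand in for face embeddings, and a simple classifier plays the role of the recognition model. Every name, group shift, and noise level here is invented for illustration, and this is nothing like a production facial recognition pipeline, but it shows the mechanism KJ describes: a model fit mostly on one group does worse on the underrepresented one, and noisier inputs (read: worse photos) drag accuracy down across the board.

```python
# Toy sketch of two facial recognition failure modes. All numbers and
# group names are made up; 2-D points stand in for face embeddings.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift, extra_noise=0.0):
    """Two classes of synthetic 'embeddings'. `shift` moves the whole
    group, so each group really needs its own decision boundary."""
    half = n // 2
    X0 = rng.normal([0.0 + shift, 0.0 + shift], 1.0, size=(half, 2))
    X1 = rng.normal([2.0 + shift, 2.0 + shift], 1.0, size=(n - half, 2))
    X = np.vstack([X0, X1])
    if extra_noise:  # crude stand-in for a low-quality input photo
        X = X + rng.normal(0.0, extra_noise, size=X.shape)
    y = np.array([0] * half + [1] * (n - half))
    return X, y

# Failure mode 1: unrepresentative training data (90% group A, 10% group B).
Xa, ya = make_group(900, shift=0.0)
Xb, yb = make_group(100, shift=3.0)
model = LogisticRegression().fit(np.vstack([Xa, Xb]),
                                 np.concatenate([ya, yb]))

# Evaluate on balanced, held-out samples from each group.
for name, shift in [("group A (overrepresented)", 0.0),
                    ("group B (underrepresented)", 3.0)]:
    Xt, yt = make_group(1000, shift)
    acc = (model.predict(Xt) == yt).mean()
    print(f"{name}: accuracy = {acc:.2f}")

# Failure mode 2: even for the well-represented group, a degraded input
# (more noise, i.e. a worse photo) pulls accuracy down.
for noise in (0.0, 1.0, 2.0):
    Xt, yt = make_group(1000, shift=0.0, extra_noise=noise)
    acc = (model.predict(Xt) == yt).mean()
    print(f"group A with input noise {noise}: accuracy = {acc:.2f}")
```

With these invented numbers, the overrepresented group should score well while the underrepresented group hovers much closer to chance, and group A’s accuracy slides as the simulated photo quality gets worse. That’s “garbage in, garbage out” in miniature.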

This stuff pisses me off to begin with, but what riles me up even more is that engineers like Timnit Gebru have pointed out the ethical issues in AI tech before and gotten fired for their troubles. It’s almost as if the processes are working just fine for Certain Demographics in the name of a form of supremacy. I just can’t think of for whom and for what…

Face related: Discriminator, an interactive documentary about facial recognition algorithms trained on Flickr’s facial database
