A new paper shines a spotlight on the misuse of intersectionality in 'AI fairness'

The term ‘intersectionality’ is one of the most misused in the sociopolitical sciences, thanks to decades of misinterpretation and dilution. Unfortunately, that misuse has found its way into AI as researchers attempt to improve fairness within the tech. Their intentions may be well-meaning, but the impact can be severe, harmful, and long-lasting, which is why analyses like the one Anaelia Ovalle et al. conducted in their paper “Factoring the Matrix of Domination: A Critical Review and Reimagination of Intersectionality in AI Fairness” are so important. Here’s the abstract:

Intersectionality is a critical framework that, through inquiry and praxis, allows us to examine how social inequalities persist through domains of structure and discipline. Given AI fairness’ raison d’être of “fairness,” we argue that adopting intersectionality as an analytical framework is pivotal to effectively operationalizing fairness. Through a critical review of how intersectionality is discussed in 30 papers from the AI fairness literature, we deductively and inductively: 1) map how intersectionality tenets operate within the AI fairness paradigm and 2) uncover gaps between the conceptualization and operationalization of intersectionality. We find that researchers overwhelmingly reduce intersectionality to optimizing for fairness metrics over demographic subgroups. They also fail to discuss their social context and when mentioning power, they mostly situate it only within the AI pipeline. We: 3) outline and assess the implications of these gaps for critical inquiry and praxis, and 4) provide actionable recommendations for AI fairness researchers to engage with intersectionality in their work by grounding it in AI epistemology.

Anaelia Ovalle, Arjun Subramonian, Vagrant Gautam, Gilbert Gee, Kai-Wei Chang (2023)

The researchers’ critiques are thorough and cover several angles of review, including:

  • Direct citations of intersectionality (Crenshaw, 1989)
  • How and how often bias was linked with the term in the reviewed papers
  • How intersectionality was reinterpreted (e.g. as a component of “subgroup fairness” or through the lens of “anti-discrimination legislation”)

I tend to avoid papers on AI fairness or race unless, like this one, they directly address inequalities. They are often written by white men who ignore fundamental issues and normalise everything to a metric, or, worse, propose a form of tech that would further perpetuate harmful biases; engaging with them serves me no purpose. That’s why “Factoring the Matrix of Domination” is so eye-opening and important. We don’t need any more edgy AI tech that negatively impacts Black and brown people any more than they already are. To paraphrase Jenifer Lewis, “[…] leave that racist shit at home. It’s boring” (source).

Have a read and feel free to leave a comment (just don’t be foolish about it!).
