[Paper] Critical Race Theory for HCI


I’ve had my eye on this paper since I saw that it received Best Paper at CHI 2020, but in light of recent events it seems more timely than ever. How does critical race theory fit into HCI? How do technological systems perpetuate racism? How can we do better?

Authors: Ihudiya Finda Ogbonnaya-Ogburu, Angela D.R. Smith, Alexandra To, Kentaro Toyama

Link: ACM Digital Library

Background

This paper studies the intersection of race and HCI, which has largely been considered a niche subfield (and at other times been outright ignored). From the abstract:

The human-computer interaction community has made some efforts toward racial diversity, but the outcomes remain meager. We introduce critical race theory and adapt it for HCI to lay a theoretical basis for race-conscious efforts, both in research and within our community.

This work attempts to bridge the gap between HCI and critical race theory. It first discusses some of the tenets of critical race theory, then adapts them to the HCI community. I’ll call out these adaptations here and discuss them in depth later (this is a direct quote, but the bullet points are mine):

We contribute the following HCI-focused adaptations:

  • racism is pervasive and ordinary in our society’s digital platforms and the larger socio-technical systems in which they are embedded;
  • interest convergence is at work even in the HCI community;
  • storytelling is an effective means of elevating stifled racial voices in HCI;
  • and, the technology sector’s color-blind tendencies—based on both liberalism and market capitalism—reinforce racist disparities.

Critical race theory

The field of critical race theory dates back to the 1970s as a response to the civil rights legislation of the 1960s. It draws “the role of power, history, culture, and ideology on social phenomena” from critical theory, while focusing on race in particular. I’ll now review some of the main ideas:

  • Racism is ordinary, not aberrational: it’s easy to think of racism as an infrequent event, but in fact it’s pervasive and structural.
  • Race and racism are socially constructed: the division of people into racial categories, and societal behavior with respect to those categories, is artificial.
  • Identity is intersectional: people represent a unique set of overlapping identities; these create different contexts for different people’s experiences.
  • Those with power rarely concede it without interest convergence: racism benefits some people, and they will often try (intentionally or not) to perpetuate it.
  • Liberalism itself can hinder anti-racist progress: the typically liberal ideas of colorblindness and equality are at odds with the need for race-conscious views.
  • There is a uniqueness to the voice of color, and storytelling is a means for it to be heard: sharing stories helps to challenge dominant narratives.

As part of reading this, I’m reflecting on where it sits relative to my current understanding of race and social justice. None of the concepts above are unfamiliar to me, thanks to a lot of education from different programs and experiences at Northwestern. As usual, though, it’s helpful to see these ideas formalized and placed within the broader framework of what I now know to be critical race theory.

The related work section broadly discusses existing work at the intersection of race and HCI. Unsurprisingly, there are few HCI papers engaging with race beyond a superficial level (just 17 in CHI proceedings, as of 2016).

Working with communities of color (e.g., in design groups) can require specific steps, “from gaining access and building rapport [with non-majority communities], demonstrating commitments, and overcoming institutional and personal barriers to design research to allow participants to fundamentally guide the research and approach.” Especially when in a position of power (as a researcher), engaging with minoritized communities requires special care.

Bias in technology is often thought of in the form of algorithmic bias, in which, for example, facial recognition systems perform worse on people with darker skin. Algorithmic bias can also arise from training data, which I’ve written about before. HCI is working on this, but it’s very much a work in progress: “race should not be relegated to a niche topic in HCI.”

Allied research consists of subfields like feminist HCI or queer HCI, but the “these should not be niche topics” idea applies here too. The idea of intersectionality (that different identities can overlap, interact, and conflict) has made its way into HCI as well.

Personal stories

The paper devotes a section to personal stories broadly related to the intersection of race and HCI. I will not attempt to summarize them here out of a desire to preserve their integrity, but I encourage everyone to read them online (here’s a non-ACM link). Now, more than ever, it’s important to listen to voices of color and learn from their stories.

Adapting critical race theory for HCI

The next section adapts ideas from critical race theory into an HCI context. What follows is a summary of one of the most thought-provoking things I’ve read recently.

Racism is ordinary in our sociotechnical world: this is, I think, the most important claim in the paper.

Just as critical race theory asserts, racism is an ordinary, everyday fact of life, and the contexts of neither HCI nor research are an exception. That this needs to be said at all, is itself one of racism’s most insidious tricks—that those who do not experience racism can pretend that it is an aberration, an occasional brokenness in what is otherwise a functioning world.

Racist artifacts are baked into the systems that we use every day. “In a world where racism is ordinary, racism in its technologies is also ordinary.”

Storytelling and voices of color in HCI: the authors discuss the value of storytelling, which HCI already broadly appreciates, though there is room for improvement in doing so in race-conscious ways. “For a community that values ‘understanding people’ and user-centeredness, underrepresented voices of color offer unique insight into the world that others may not see.”

Interest convergence & material reality in HCI: there’s a gap between aspirations (to be inclusive, to promote D&I efforts) and outcomes (minority representation in HCI leadership, academia, etc.). “Interest convergence,” in which progress occurs only when it aligns with the interests of the white majority, is at play.

The limits of tech liberalism: the “academic, technocratic, and capitalist aversion to race-conscious work inhibits the fight against racism,” shown through examples like editing away minority vernacular in favor of “proper” grammar. There are broader critiques related to how capitalism’s tendency towards exploitation preys upon minorities and fuels inequality, which is probably true but a lot to unpack.

Call to action

The paper ends with calls to action. When working with marginalized groups, acknowledge the additional burden of representation (racial and otherwise). Highlight voices of color when relevant (the authors mention papers about racism and social justice written by all-white authors).

Being other-conscious is necessary, too:

Next, when designing research or writing it up, researchers should strive to be other-conscious. Reflexivity—being self-aware and conscious of one’s own station and biases as they might affect research and writing—is something that most HCI researchers acknowledge as important [15, 109]. Its important converse is being other-conscious, especially as it has to do with race: to think through one’s own research and writing as it might be received by groups that are not one’s own.

There are others for the broader HCI community, which I omit because this is getting very long.

Reflections

This is one of the most interesting papers I’ve read in a long time, and I am really grateful to the authors for writing this (and to CHI for acknowledging it with the Best Paper award). This paper is obviously timely given recent events, but even without the George Floyd protests it stands in a broader context of increasing hate crimes, police brutality, and white supremacy.

I believe that the most important point of the paper is that “racism is ordinary in our sociotechnical world.” I’ve thought and written about how bias is pervasive in invisible algorithms, like Google search results, Facebook and Twitter’s timelines, Amazon’s product rankings, and more. More fundamentally, though, the search results, posts and tweets, and product listings themselves, and not just the algorithms that curate them, are products of a system with deep-rooted racism.

We’ll sometimes hear about how cameras think Asian faces are blinking, or about how searching for Black teens gets you mugshots compared to the smiles of White teens, and more. But the social context in which these systems exist has racism baked into it. And that—regardless of anyone’s intention—is sufficient to perpetuate it.

Put otherwise, tech isn’t special. It’s subject to the same biases, prejudices, and, yes, racism that everything else is. Algorithms, despite being mathematical artifacts, are not objective; they exist in the context of a world with deeply rooted, systemic, ordinary racism.

I loved this paper. I think one of the most important questions this raises is how we can make “race in HCI” less of a subfield, and more of a foundation that everyone considers when designing race-conscious systems.