This is a summary of Celeste Kidd’s invited talk at NeurIPS 2019, How to Know, plus some thoughts at the end.

What everyone in ML should know about humans

Early on, Kidd discusses how (human) learners pick out what they want to learn. If I were to pick a book to read, one “predictable” possibility would be a children’s book about the alphabet, but this would be redundant with all of my existing knowledge. Another “surprising” possibility, which has no overlap with my existing knowledge, would be a book written in German about billiards or trade treaties; theoretically I’d be able to learn a new topic and new language, but it’s too new for me to even be able to get started. This, Kidd argues, highlights that ML algorithms might not want to seek out the most surprising data at the edge of the world, but rather the data at the edge of what can be learned. The point of all this was that “humans continuously form beliefs,” and that they are never done learning.
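To make the "edge of what can be learned" idea concrete, here is a toy acquisition rule, my own sketch rather than anything from the talk: instead of maximizing surprise, it scores candidate examples by how close their surprise is to an intermediate target (the `target` value here is an arbitrary assumption).

```python
import numpy as np

def learnability_score(surprise, target=0.5):
    """Prefer intermediate surprise: fully predictable examples
    (the alphabet book) teach nothing, while maximally surprising ones
    (the German billiards book) can't connect to existing knowledge."""
    return -(surprise - target) ** 2

def pick_next_example(model_probs):
    # model_probs[i] is the learner's predicted probability of example i
    # under its current beliefs; "surprise" here is just 1 - p. A real
    # system might use negative log-likelihood or expected information gain.
    surprise = 1.0 - np.asarray(model_probs)
    return int(np.argmax(learnability_score(surprise)))

# The near-certain example (p = 0.99) and the near-impossible one
# (p = 0.01) both lose to the middling example.
print(pick_next_example([0.99, 0.5, 0.01]))  # -> 1
```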

Another idea is that "certainty diminishes interest." In other words, you're not curious about things that you think you know. A rational agent should focus on areas of uncertainty, though it's also the case that people are sometimes certain when they are incorrect. People are also curious based on what they think they know, not what they actually know, which means people don't have accurate mental models of their own uncertainty. Kidd also discusses stubbornness here: if you think you know the right answer to something and are presented with evidence to the contrary, you are more inclined to ignore that feedback.

She adds a couple more points, that "certainty is driven by feedback" and "less feedback may encourage overconfidence," followed by "humans form beliefs quickly." In a study where people were exposed to search results about "activated charcoal" (a topic on which people are unlikely to hold prior beliefs), they transitioned from undecided to believing that activated charcoal has health benefits after just a few minutes of watching pseudoscientific videos.

All of these, taken together, tell us “there’s no such thing as a neutral tech platform.” The algorithms that decide (at a minimum) the order in which information is presented make a huge difference in how people form beliefs. And when we optimize for engagement (however you define it, with clicks, upvotes, comments, etc.), we intentionally try to manipulate people’s behavior, and the mediating variable is people’s beliefs. Put otherwise, “the algorithms pushing content online have profound impacts on what we believe.”

It’s not a big deal if a teenager walks away from a search for activated charcoal thinking that black smoothies are super cool. But lives are in danger when a parent walks away from a Google search about whether or not they should vaccinate their children. Lives are at stake when people search for information online about whether or not climate change is real, and whether or not they should support drastic action. This is an ethical issue.

A different turn: “you have been misled”

The second part of Kidd's talk takes a different direction: she shares a personal anecdote about how a professor at Rochester abused his power and damaged the education and professional opportunities of her and other women. Kidd then moves to a "key point that I think most people don't understand, but which continues to shape these debates. I want to talk about the climate for men."

It may seem like a scary time to be a man in tech right now. I talk to my male colleagues and I know that there's anxiety about how to talk about gender and how to interact with women post #MeToo. There's a sense that a career could be destroyed over awkward passes or misunderstandings. People believe that men are being fired for subtle comments or minor jokes, or just plain kindness or cordial behavior. This perception reasonably makes people very nervous. What I want to say today to all of the men in the room is that you have been misled.

Kidd continues: no woman wants to file a complaint. No one wants to file a lawsuit, and few can afford to. The harassment and retaliation that women experience when filing these complaints is awful.

When you hear about a case [like a lawsuit, investigation, etc.], your priors should tell you that it's very likely that some unusually bad behavior happened. Where people get misled is that when misconduct becomes public, offenders almost universally apologize for a very minor infraction, while omitting the many more serious behaviors that actually led to the lawsuit, behaviors they should be remorseful for but never are. They're lying by omission.

“These apologies for minor things mislead people into believing that the accused person is being unfairly persecuted for that minor misstep,” she goes on, “and makes those reporting seem unreasonable or even unethical.” The apologies for minor things are smokescreens to make people think offenders are confused, and not abusive. And these make good people incorrectly believe that their careers could be ended by minor missteps.

She has three lessons for everyone:

  • When you hear about a woman having problems in your field or lab, she’s invariably faced much, much more than you are aware of.
  • When you hear men apologize for "misunderstandings," this is completely predictable, and it is the universal response, no matter what actually happened.
  • Unless you are doing something deeply wrong, you are incredibly safe. The idea that you are not is a myth that those who have genuinely abused people want you to believe.

Thoughts

I love the idea of a neutral platform not existing. There's no way to show a "neutral" news feed; even if we ignore engagement entirely and show everything from every Facebook friend in reverse-chronological order, this favors content from people who post frequently and people who are active at the same times you are. (That's ignoring the fact that it's infeasible for anyone to see all the content from all of their friends.) Add some kind of engagement target to the mix and you're already sunk.
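As a toy illustration (my construction, with made-up posting rates): even a "neutral" reverse-chronological feed ends up dominated by whoever posts most often, before any ranking model enters the picture.

```python
import random
from collections import Counter

random.seed(0)

# Hypothetical friends with different posting rates (posts per day).
post_rates = {"alice": 20, "bob": 2, "carol": 1}

# Each post is (timestamp, author); timestamps drawn uniformly over a day.
posts = [(random.uniform(0, 24), friend)
         for friend, rate in post_rates.items()
         for _ in range(rate)]

# The "neutral" feed: the ten most recent posts, newest first.
feed = sorted(posts, reverse=True)[:10]

print(Counter(friend for _, friend in feed))
# Typically something like Counter({'alice': 9, 'bob': 1}): the feed's
# composition tracks posting frequency, not what you care about.
```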

Important related ideas are that there’s no such thing as an unbiased dataset (see my post on ImageNet), or that when choosing to do nothing you are still making a choice. Every decision made when building tech platforms matters, and when your scale is as big as Google, Facebook, or Twitter, these have important (and unstudied!) impacts on how people develop their beliefs. Understanding how platforms and algorithms affect people is a really interesting and challenging research direction.

And, of course, I admire Kidd for sharing her experience at such a public and male-dominated (80%! yikes) venue. She mentioned in an interview that she had never presented her research and her experiences with sexual harassment at the same time, but she wanted to use the privilege of giving a talk at NeurIPS, which gave her power and a platform to speak out. I hope that her talk reaches other people, and I love the core message “you have been misled.”