This week, I read two articles about “broader impact” statements, soon to be required by NeurIPS. Other articles include a discussion of Facebook’s suicide prevention AI, suggestions on fighting fake news, and poll results showing Fox News viewers are less likely to take the coronavirus threat seriously.

Suggestions for writing NeurIPS 2020 broader impact statements

Author: Brent Hecht

NeurIPS will now require authors to submit “broader impact” statements, “including possible societal consequences—both positive and negative” (from Getting Started with NeurIPS 2020). This is a change Hecht proposed two years ago as part of the Future of Computing Academy.

His advice is basically to start thinking about the broader impacts now, and to draw upon the literature in CHI or FAccT (formerly FAT*) for ideas on how to do this. Thinking about the broader impact early will let you pivot the project, or perhaps choose a different one entirely. It might also be worthwhile to bring a social scientist onto the research team.

Hecht also suggests that the reviewer’s job is not to judge submissions for their impacts, but rather to “evaluate the rigor with which they disclose their impacts.” It’ll take some time to figure out how to do this well, he argues, and some iteration and dialogue about it are essential.

Thoughts: It is very cool to see the leading AI conference mandate that authors devote space in their papers to this; field-wide change comes from the top. I am sure the requirement will draw criticism when applied to papers whose broader impact is not apparent—e.g., a new layer design, a new form of normalization, or other more mathematical results—but, to me, that’s fine. That’s an opportunity to think about these issues in more depth and iterate for future years.

Additionally, Brent was one of my professors at Northwestern. I was fortunate to take two classes with him, including one of my top-3 favorite classes, Algorithms and Society Seminar. I am grateful to him for helping me to land in data science, for guiding my interest in AI ethics, and for generally helping me to shape how I think about the problems in this field. Seeing this post from him and his colleagues is great!

It’s time to do something: Mitigating the negative impacts of computing through a change to the peer review process

Authors: Brent Hecht, Lauren Wilcox, Jeffrey Bigham, Johannes Schöning, Ehsan Hoque, Jason Ernst, Yonatan Bisk, Luigi De Russis, Lana Yarosh, Bushra Anjum, Danish Contractor, Cathy Wu

Summary: this post offers suggestions on how to mitigate the increasingly visible negative impacts of computing research, saying that “we can no longer simply assume that our research will have a net positive impact on the world.” While computing advances have had numerous positive impacts, they have also contributed to the erosion of privacy, threats to democracy, and job displacement from automation. The negative impacts of this kind of work are increasingly high-profile and increasingly damaging.

The authors suggest the exact intervention that NeurIPS is implementing this year: to require that papers “rigorously consider all reasonable broader impacts, both positive and negative.” They emphasize “rigorously,” citing how frequently hand-waving is tolerated.

“What if a research project will do more harm than good?” the article asks. Well, to me, the answer is simple: don’t do it, or propose interventions to mitigate the negative impacts. And if someone repeatedly conducts anti-social research, then take this into account when considering that person’s standing in the community (w.r.t. grant proposals, publications, and tenure).

The authors additionally highlight the important role of the tech press:

As such, we encourage the press to hold accountable all public communication regarding computing innovation in the same fashion as we suggest for peer reviewers above. This means asking researchers and the firms that represent them to enumerate the downsides of their innovations. It also means asking them to discuss what changes to their technologies and what new policy might mitigate these downsides.

I’ve noticed that MIT Technology Review does an excellent job at this, often writing exceptionally high-quality coverage of recent research in machine learning and computer science more broadly. The authors note that “the tech press is way ahead of the computing research community” here, which is evident to anyone reading it regularly: one of the motivations for this proposal was how quickly negative coverage of computing research has been increasing.

I’m excited to see how this takes shape in the NeurIPS papers at the end of the year. More broadly, I’m happy that the (positive and negative, but especially negative) impacts of computing on society are starting to get more attention; it’s one of my research interests and one that I hope to keep reading about.

Ethics and Artificial Intelligence: Suicide Prevention on Facebook

Authors: Norberto Nuno Gomes de Andrade, Dave Pawson, Dan Muriello, Lizzy Donahue, Jennifer Guadagno

Summary: this is an open-access paper describing Facebook’s approach to suicide prevention, published at the end of 2018. It’s written at a high level, from the perspective of “should Facebook even step into this space, and if so, how?”, rather than as a technical paper about the methodological details. For that reason, I read it mostly to get background on how Facebook (at least publicly) thought about this problem.

Facebook, thankfully, found it important to collaborate with experts: “Our first step in addressing the complex and widespread problem of suicide was to learn and understand from those that have thoroughly studied it, from those with expertise working on preventative measures, and from those who have experienced attempting suicide.”

They discuss the ethical tensions in depth; the most interesting to me is whether they had an ethical imperative to do this at all. Are they going beyond what they “should” do as a company, or as a social network? When suicide prevention institutions started asking Facebook whether it would help, citing its unique position as an asset, Facebook knew it had to take a stand on this, one way or another. They decided that they did have such an imperative, and began building out the system on that basis.

I think this is the kind of article that highlights how really, truly hard Facebook’s job is. Some may argue that it obviously has a responsibility to do this, while others may argue that the privacy of people using the platform comes first. While this question of safety vs. privacy is well-studied (not to say that it has an answer), the decisions Facebook makes on this front affect literally billions of people.

There are countless other social safety issues that Facebook has to make a similar decision on. The most talked about one is misinformation and “fake news,” though the jury is still out on how to do this well (see the NYT opinion article at the bottom of this page). At the same time, they’re working to fight child exploitation, terrorist plots, hate speech, fake accounts, election interference, and more.

Facebook, of course, has its problems. The opaque privacy policies and data sharing, the tracking that follows you around the web for advertising, and the choice not to regulate political advertising are my biggest criticisms of it. I think people give it far too little credit for how hard these problems are, though; no company has ever had to deal with some of these issues at the scale Facebook now does. That’s not to excuse it, but rather to give context for some of its rough edges.

Other, shorter articles

Building a more accurate time service at Facebook scale is an article from Facebook engineering about how they built a service (time.facebook.com) to sync time across many computers with sub-millisecond precision. The post starts by describing why leap seconds are awful and explains that the most popular approach to handling them is to “smear” the second across multiple hours; a rough illustration of the idea is sketched below. They go into great detail about their four-layer architecture and make me thankful that I have chosen a career in which I will never have to build a time service myself.
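The smearing idea is simple enough to sketch. Below is a minimal, hypothetical Python illustration of a linear smear; it is not Facebook’s implementation, and the 24-hour window and linear ramp are assumptions chosen for clarity (real services pick their own window and curve).

    from datetime import datetime, timedelta, timezone

    # Linear leap-second "smear": instead of inserting the extra second all at
    # once, spread it evenly over a window leading up to the leap second, so
    # clock readings shift by a small, continuous offset.

    SMEAR_WINDOW = timedelta(hours=24)   # assumed window length; real services vary
    LEAP_SECOND = timedelta(seconds=1)   # one positive leap second to absorb

    def smear_offset(now: datetime, leap_time: datetime) -> timedelta:
        """Offset to add to the raw clock, ramping linearly from 0 to 1 second."""
        start = leap_time - SMEAR_WINDOW
        if now <= start:
            return timedelta(0)          # before the window: no correction yet
        if now >= leap_time:
            return LEAP_SECOND           # after the leap: the full second is applied
        fraction = (now - start) / SMEAR_WINDOW   # how far into the window we are
        return fraction * LEAP_SECOND

    # Halfway through the window, clocks report an extra half second.
    leap = datetime(2017, 1, 1, tzinfo=timezone.utc)
    print(smear_offset(leap - timedelta(hours=12), leap))  # 0:00:00.500000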


Opinion | The right way to fight fake news is an opinion piece in the NYT by Dr. Gordon Pennycook and Dr. David Rand arguing that anti-misinformation strategies need to be empirically grounded, and that many strategies in use today are not:

  • The “information panels” on YouTube (shown when content is government-funded) and the “context” labels on Facebook have virtually no impact on whether people believe headlines
  • Adding messages that content is disputed by fact checkers is also tricky, because the absence of such a warning leads people to believe the content was verified, when usually it just means the content was never checked
  • General anti-misinformation campaigns (“fake news is not your friend” billboards) can reduce confidence in all news, which is ironically the goal of disinformation campaigns in the first place

The point, the authors write, is that you need to test your ideas. These are great examples of intuitively reasonable ideas that end up failing in practice.


New poll finds that Fox News viewers think the coronavirus threat is exaggerated is an article from Vox about a new coronavirus-related poll (full press release here, which wasn’t linked in the article!). Of Republicans who watch Fox News, 60% say that the coronavirus threat is exaggerated, and just 5% say they’ve stayed home during the pandemic. Meanwhile, among Republicans who don’t watch Fox News, those numbers are closer to 40% and (the still dismally low!) 30%. This is more evidence of the Republican media’s departure from the mainstream; in this case, though, it’s literally going to kill us.