Network Propaganda, part 3

Part 3 of the book Network Propaganda is titled “The Usual Suspects,” and its focus is on the propaganda culprits who received sustained public attention: chapter 7 covers the alt-right; chapter 8, Russian information operations; and chapter 9, Facebook and Cambridge Analytica.

(Note: I wrote this post back in April but forgot to post it until now. Whoops.)

Political communication, the authors argue, focuses on three mechanisms by which media influences politics: agenda setting (which questions are salient?), priming (what standards should we use to assess people or positions?), and framing (the context within which claims are made, and how that context shapes our understanding and attitudes). The right-wing media (successfully) framed immigration in terms of fear of Muslims rather than of Latin American immigration, which led to the travel ban in Trump’s first week.

As a reminder, the book is available for free online!

[Ch. 7] The Propaganda Pipeline: Hacking the Core from the Periphery

This brief chapter studies the “propaganda pipeline” that transmits outlandish narratives from the periphery of a media ecosystem to its core.

The chapter’s focus is a truly bizarre set of stories claiming that the Clinton campaign chair attended a “Spirit Cooking” dinner, where recipes “included breast milk and sperm,” and that said chair was a Wiccan practicing literal magic. It began with the WikiLeaks emails, then was amplified by a variety of sources as it grew more and more imaginative. (By the end, Redditors were claiming that the term “pizza” was code for sex trafficking children.)

The pipeline was evident: this effort drew in Russians, alt-right reporters, WikiLeaks’ Facebook and Twitter accounts, and Reddit and 4chan users, before eventually making it to Sean Hannity on Fox News. But the amplification depended on WikiLeaks and then the major nodes in the right-wing media network, not dedicated alt-right actors.

Our observations lead us to believe that those efforts, while real and observable, were not determinative in the election or the first year of the Trump presidency. … The top media outlets and political elites, not Redditors or meme propagators from the alt-right, were primarily responsible for distributing these frames [of Islamophobia and Clinton’s emails] widely.

[Ch. 8] Are the Russians Coming?

Chapter 8 studies the claim that Russia mounted (dis)information operations in the United States. Its conclusion is that “evidence of Russian interference is strong, but that the evidence of its impact is scant.”

What we observe is that Russian efforts take advantage of existing fissures and pathways. … The insular, domestically produced network of sites and social media diffusion networks that traffic in politically motivated falsehoods, coupled with the persistent attacks on mainstream media and other evidence-based institutions and expertise, have made the right wing of the American media ecosystem more susceptible to penetration, less resilient, and less capable of self-correction. When Russian propaganda efforts are consistent with right-wing American framings and beliefs, these falsehoods are able to insert themselves, propagate, and gain credence in the right-wing media ecosystem. By contrast, similar efforts aimed to leverage left-wing biases have to overcome the basic checks provided by a media ecosystem inhabited by professional-norms-oriented media outlets.

Russian information campaigns did not start in 2016: one journalist first observed “Internet brigades” within Russian media in 2003. The now-infamous Internet Research Agency (IRA) was registered in 2013, and reported on by Russian investigative reporters who were critical of their government. And in March 2015, the European Council identified dangers of Russian information operations in Eastern Europe.

The chapter continues with a walk through common techniques. I’m heavily summarizing here, as this is some ten pages in the book with a variety of examples:

  • Hacking and doxxing: hacking into email accounts or computers, and obtaining then leaking compromising information, is a technique Russian propagandists have repeatedly used. In the US, the most famous example was the hack of the DNC emails.
  • Social media sockpuppets, bots, cyborgs, and ads: the core strategy was to “increase disaffection, distrust, and polarization,” often by boosting the white-identity vote or diverting the left to Jill Stein. Mueller found that the IRA infiltrated social media through false accounts, either fully automated (bots), humans pretending to be someone else (sockpuppets), or a combination (cyborgs). They used all major tech platforms, though the platforms said that this was a tiny fraction of overall activity.
  • White propaganda, grey propaganda, and useful idiots: white propaganda came openly from a known (if partisan) outlet; grey propaganda sat somewhere between that and outright lies, with unclear or falsely described sources; useful idiots were genuine local media figures who happened to pick up Russian-developed narratives.

The authors underscore the point that one would have to be willfully blind or complicit to deny that Russia had been attacking the American media. They warn, however:

But evidence of sustained effort is not the same as evidence of impact or prevalence. It would be profoundly counterproductive to embrace the narrative that we can no longer know what is true because of Russian bots, sockpuppets, or shady propaganda. Indeed, having us adopt that attitude would mark a remarkable success for that Russian effort: the success in denying democracy one of its core pillars—the capacity to have a public debate based on some sense of a shared reality and trust in institutions.

The authors continue by describing the effects of the efforts above:

  • Hacking emails: the direct goal was to split Sanders supporters from Clinton, but this largely failed. The media ecosystem on the left did not follow the same propaganda feedback loop that the right-wing media does, and most of the “emails” coverage was driven by Fox News independent of the DNC leak.
  • Infiltrating social media: one allegation from Mueller was that a Russian-controlled Twitter account injected the term “voter fraud” into the media ecosystem, on August 11 and November 2. Careful analysis reveals that the summer spike was due to, in quick succession, (1) a Reddit AMA by Trump himself, (2) court rulings striking down voter ID laws in three states, and (3) a Trump rally. Likewise, the November incident was due to a viral Trump tweet. It is unlikely that this account had more than a small incremental effect on the use of the term.
  • Using social media to suppress and split Democrats: in short, it is again difficult to conclude that efforts to divert black voters to Jill Stein (or stop them from voting) were successful. Facebook could resolve this question fairly conclusively if it wished, the authors add, and they suggest it make available (appropriately anonymized or aggregated) datasets of targeted advertising for researchers to study.
  • Twitter bots: finding Russian bots is hard, because the weakest link in any machine learning approach is the training set. The authors write that even state-of-the-art projects, while by definition the best at what they do, should not be the basis for reporting that some large percentage of Twitter activity is bots (one reason why is sketched below). And even then, the more important question is one of effect, and there is still no evidence to suggest that bots helped amplify propaganda in any meaningful way.
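
One way to see why classifier-based prevalence estimates mislead is the base-rate effect: when bots are a small fraction of all accounts, even a modest false-positive rate means most flagged accounts are humans. Here is a minimal sketch; every number in it is a hypothetical placeholder, not a figure from the book or from any bot-detection study:

```python
# Hypothetical illustration of the base-rate effect in bot detection.
# None of these numbers come from a real study; they only show what
# happens when bots are rare and the classifier is imperfect.

true_bot_rate = 0.02        # assume 2% of accounts are actually bots
recall = 0.90               # assume the classifier catches 90% of real bots
false_positive_rate = 0.05  # assume 5% of humans get flagged as bots

flagged = true_bot_rate * recall + (1 - true_bot_rate) * false_positive_rate
precision = (true_bot_rate * recall) / flagged

print(f"Share of accounts flagged as bots: {flagged:.1%}")    # ~6.7%
print(f"Chance a flagged account is a bot: {precision:.1%}")  # ~26.9%
```

Under these made-up numbers, a study reporting the flagged share would claim more than three times the true bot prevalence, and nearly three out of four “bots” would be humans; and that is before accounting for a biased training set, which the authors single out as the weakest link.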

Identifying attempts at Russian interference is indeed important, but one must not forget that in order for these efforts to be influential, they have to make their way through the American media ecosystem. The mainstream media is, by design, resilient to Russian propaganda; the right-wing media, however, does not see the Russian origin of a story as a reason to ignore it. “Willing embrace of divisive Russian propaganda, not innocent error because of Twitter and Facebook manipulation, is the core challenge.”

The closing remarks of this chapter are excellently put:

The most important implication is for American conservatives … There is mounting evidence that “the Fox News effect” has given the Republican Party a clear edge in the past several election cycles. There is, it seems, a clear short-term partisan advantage of going along with the style and focus of these right-wing media outlets. But competition among outlets seeking to attract conservative audiences has resulted in a feedback cycle, as sites vie to produce more outrage and anger and get ever more extreme in their framing. … This competitive dynamic among right-wing media increases the shrill, conspiracy-tainted tone and content of coverage and makes right-wing audiences ever more susceptible to manipulation.

The result is a United States that is vulnerable to disinformation campaigns, both foreign and domestic. That susceptibility does not come from Russia, though Russia clearly has been trying to exploit it. That susceptibility does not come from Facebook, though Facebook has clearly been a primary vector online. It comes from three decades of divergent media practices and consumption habits that have left a large number of Americans, overwhelmingly on the right of the political spectrum, vulnerable to disinformation and ready to believe the worst, as long as it lines up with their partisan identity. And that susceptibility should be, in the long term, unacceptable to conservatives every bit as much as it is to all other Americans despite its short-term electoral benefits.

[Ch. 9] Mammon’s Algorithm: Marketing, Manipulation, and Clickbait on Facebook

This chapter studies three related threats: Facebook microtargeting and dark ads, behavioral manipulation by Cambridge Analytica, and political clickbait factories. Like the other chapters in this section, it argues that the Cambridge Analytica and clickbait threats were overstated, but it warns that microtargeting is a novel threat to democracy.

The fundamental problem is that Facebook’s core business is to collect highly refined data about its users and convert that data into microtargeted manipulations (advertisements, newsfeed adjustments) aimed at getting its users to want, believe, or do things. Actors who want to get people to do things—usually to spend money, sometimes to vote or protest—value that service. … But even if you think that microtargeted behavioral marketing is fine for parting people with their money, the normative considerations are acutely different in the context of democratic elections. That same platform-based, microtargeted manipulation used on voters threatens to undermine the very possibility of a democratic polity.

Behavioral manipulation (or “advertising,” whatever makes it sound less controversial) is one thing; doing this to influence democratic behavior, however, is itself a threat to democracy. Ads on Facebook are free from many of the regulations that govern TV advertising, and they are not subject to as much public scrutiny by virtue of being targeted.

The authors document in depth how the Trump campaign built a powerful digital operation: by essentially going all-in on Facebook and partnering with data firms like Acxiom, it was able to experiment heavily and create precise, targeted ads. The approach was not new; the scale at which the campaign operated, and Facebook’s ever-growing ability to target effectively, were.

Techno-sociologist Zeynep Tufekci warned of this back in 2012, in a NYT opinion piece titled “Beware the Smart Campaign.” Writing about Obama’s (also successful) large-scale digital operation, she said: “What I really worry about, though, is that these new methods are more effective in manipulating people.” And she’s not wrong; that’s precisely what has happened over the last eight years.

Cambridge Analytica was slightly different. Employing techniques from a research paper, “Private traits and attributes are predictable from digital records of human behavior,” the company collected data on 87 million Facebook users by having participants in a Mechanical Turk study install a Facebook app that harvested data on them and their friends (which was famously against Facebook’s TOS).

While Cambridge Analytica was eager to take credit for a pivotal role in the election, the reality is a little messier. A 2017 paper studying the effects of Facebook-informed advertising shows statistically significant, but not practically significant, effect sizes. Indeed, in Pennsylvania, if one had correctly identified every voter’s personality (one can’t), if every personality type were as persuadable as the most manipulable one (it isn’t), if the campaign had reached every potential voter (it didn’t), and if it had achieved the effect sizes claimed by the paper (it didn’t), it would have shifted about 3,000 votes. Trump beat Clinton by 44,000.
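
To make the arithmetic of that bound concrete, here is a minimal sketch. The two inputs are hypothetical placeholders (the book does not present the calculation in this form); only the roughly 3,000-vote result and the 44,000-vote margin come from the text:

```python
# Back-of-the-envelope upper bound on votes shifted by psychographic
# targeting in Pennsylvania. The inputs are hypothetical placeholders
# chosen to land on the ~3,000-vote bound reported in the book.

voters_reached = 6_000_000  # assume every PA voter is reached (they weren't)
per_voter_shift = 0.0005    # assume a best-case persuasion effect per voter

votes_shifted = voters_reached * per_voter_shift
margin = 44_000             # Trump's margin over Clinton in Pennsylvania

print(f"Best-case votes shifted: {votes_shifted:,.0f}")  # 3,000
print(f"Margin of victory:       {margin:,}")            # 44,000
```

Even with every assumption stacked in Cambridge Analytica’s favor, the effect falls an order of magnitude short of the margin.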

Just as with the Russians, it is likely that the effect of Cambridge Analytica on the election was overstated.

Clickbait fabricators are yet another concern. Facebook’s News Feed removed barriers to participating in news—figuring out how to “work the algorithm” was all that an aspiring media creator had to do to draw traffic to their site. It turns out that creating content that people would click on just meant triggering automatic, emotional responses. This isn’t a new idea: tabloids predate the internet. The difference is in the ease of reach.

Our own data support the proposition that the economic incentives and ease of reach that Facebook offered did, in fact, result in Facebook’s political content exhibiting more extreme partisanship than either Twitter or the open web. Political clickbait sites were most commonly found on the far edges of the political spectrum and were significantly more pronounced on the right.

“Most sites that were particularly dependent on Facebook for attention,” the authors write, acted similarly. “They engaged in little or no original reporting and freely borrowed from other sources, producing short posts or articles with provocative titles intended to drive social media traffic.”

Studying the websites that received disproportionate attention on Facebook compared to Twitter turns up almost nothing but political clickbait fabricators: Addicting Info, Bipartisan Report, Conservative Tribune, Occupy Democrats, and more. A study on their effectiveness found that the effect size was, as before, small. These sites drew public attention as possible culprits of information disorder and put pressure on Facebook to combat its “fake news problem,” but once again it’s unlikely that they had a substantial impact on the election.
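
As a rough illustration of what “disproportionate attention” means here, one could compare each site’s Facebook share counts to its Twitter share counts. This is not the authors’ exact methodology, and the site names and numbers below are invented:

```python
# Toy "Facebook-disproportion" comparison. Site names and share counts
# are made up; this only sketches the shape of such a measurement.

sites = {
    "example-clickbait.com": (900_000, 30_000),   # (facebook, twitter)
    "example-newspaper.com": (500_000, 450_000),
}

for site, (facebook, twitter) in sites.items():
    ratio = facebook / max(twitter, 1)  # guard against a zero count
    print(f"{site}: Facebook/Twitter attention ratio = {ratio:.1f}")
```

Sites whose ratio sits far above the ecosystem-wide baseline are the ones that live or die by the News Feed, which matches the population the authors describe.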

Closing thoughts

This entire section, and particularly the chapter on Russia, is the basis for the authors’ central point: that long-term media dynamics, not the alt-right, nor Russia, nor Facebook, were responsible for the radicalization of the American right and the election of Donald Trump. Suggesting that such actors were decisive misses the point. And in the case of Russia, it’s exactly what they want: to feed generalized distrust.

I love this. I am so happy to have a better understanding of (the lack of) Russian influence in the United States. It seemed suspect that they were able to manipulate entire social media platforms, as some claimed, and the explanation of them feeding off existing fractures is much more believable.

This is also an explanation that is kinder to American conservatives: suggesting that they were manipulated by Russia (and some, the “useful idiots,” certainly were) is at least a little insulting, and it’s difficult to believe that Russian information operations could manipulate some 40% of Americans. That conservatives were instead shaped by a three-decade-long radicalization and the emergence of a separate media ecosystem places far less of the blame on individuals. That also makes sense to me: I don’t believe that people have gotten more gullible, but rather that the institutional forces manipulating us have gotten stronger.

The explanation also supports the theory of Republican radicalization, rather than “both sides” polarization. The right-wing media ecosystem, as the authors explained in earlier chapters, is more vulnerable to propaganda: its structure encourages the spread of any content that fits the platform’s and the party’s agenda, while the center-left ecosystem has self-correction mechanisms to avoid this.

On Cambridge Analytica: it’s not surprising to me that the effects of Cambridge Analytica were overstated. It’s the same story we’re hearing about the other usual suspects: Russia, the alt-right, and Facebook itself. Additionally, previous research has shown how ineffective digital advertising can be, suggesting that the company’s claims of “advanced psychographic targeting” were exaggerated.

I think that what the Cambridge Analytica scandal actually revealed, though, was the vast amount of data that Facebook holds on literally billions of people, and how permissive its data policies used to be. Entities from Tinder to the 2012 Obama campaign took advantage of those policies to collect all kinds of information, and I don’t think that was clear to a lot of people until the Cambridge Analytica fallout.

On the News Feed: Ben Thompson of Stratechery has made the point a couple of times (the more recent is in a paid article) that for direct-to-consumer companies like Dollar Shave Club or Casper, Facebook is better at finding them customers than anyone else is. The same, I think, applies both to the clickbait fabricators of chapter 9 and to smaller, legitimate news outlets.

That Facebook’s algorithm could be gamed by clickbait fabricators is, again, not surprising. There’s an entire industry, search engine optimization, that grew out of manipulating (with varying success) the Google Search algorithm, and so of course the same exists for Facebook, YouTube, and other media platforms.

Creating emotional content to draw in clicks isn’t new, though. Tabloids operate on the same model, and long predate the internet. The difference here is in the ease of reach: Facebook’s audience of billions of users, and the associated revenue upside, became available for significantly less effort than ever before.

We see the same happening with BuzzFeed, which is garden-variety clickbait rather than political clickbait specifically. Ben Thompson wrote about it in early 2019, too: relying on (what he calls) Aggregators to reach your audience is fundamentally unviable. I am sure that clickbait will always find a way to reach unsuspecting users, but businesses that depend on a single Aggregator for that reach are fragile.

On political advertising and behavioral manipulation: I find the conclusions of chapter 9 a little surprising, given the rest of the book. The authors make clear that they believe targeted behavioral manipulation is a serious threat to democracy, despite there being no strong evidence that it has done real damage so far. They suggest that “the basic risk of undermining voter autonomy” and “the almost certain erosion of our collective confidence in the legitimacy of election outcomes” should convince us that narrowly targeted political advertising should be constrained, if not banned outright.

This is obviously a hot topic: recently, Facebook famously announced that it would not fact-check political ads, Twitter banned them outright, Google restricted the degree of targeting allowed, and Spotify paused political ads altogether while acknowledging that the problem is a hard one. I’m in the camp that political advertising needs to be highly regulated, and at least subject to the same scrutiny that TV ads are.

That the authors write so strongly about this threat is surprising, given that the core conclusion of Chapter 8 was “the Russian threat is overstated, and believing it is exactly what they want.” They say that they expand on this later in the book, and I look forward to reading that.

Further reading

This section linked to a variety of articles and academic papers. I haven’t added all of these to my Reading List yet, but I’m looking forward to them regardless: