
Bad Algorithms Didn’t Break Democracy


Over the past five decades, America’s war on drugs has been motivated and organized by the fantasy that the proliferation of substance abuse is fundamentally a supply problem. The remedy, accordingly, has been to restrict the production and distribution of narcotics: Smash the cartels, cauterize the trafficking routes, arrest the dealers. This approach has, predictably enough, devolved into a self-sustaining game of whack-a-mole.

Since 2016, the panic about misinformation online has been driven by a similar fantasy. The arguments predicated on this view have become familiar, almost boilerplate. One recent example was a November speech given by the comedian Sacha Baron Cohen.

“Today around the world, demagogues appeal to our worst instincts. Conspiracy theories once confined to the fringe are going mainstream,” said the actor, in a rare performance in character as himself. “It’s as if the Age of Reason—the era of evidential argument—is ending, and now knowledge is increasingly delegitimized and scientific consensus is dismissed. Democracy, which depends on shared truths, is in retreat, and autocracy, which depends on shared lies, is on the march.” As Baron Cohen put it, it’s “pretty clear” what’s behind these trends: “All this hate and violence is being facilitated by a handful of internet companies that amount to the greatest propaganda machine in history.”


As with the war on drugs, the chief villains in this account are the vectors: the social media companies and their recommendation algorithms, which stoke the viral profusion of preposterous content. The people who originate the memes, like peasants who grow poppies or coca, aren’t painted as blameless, exactly, but their behavior is understood to reflect incentives that have been engineered by others. Facebook and Google and Twitter are the cartels.

And the users? They go about their online business—“not aware,” as technology investor and critic Roger McNamee puts it, “that platforms orchestrate all of this behavior upstream.” Tech’s critics offer various solutions: to break up the platforms entirely, to hold them liable for what users post, or to demand that they screen content for its truth-value.

It’s easy to understand why this narrative is so appealing. The big social media firms enjoy enormous power; their algorithms are inscrutable; they seem to lack a proper understanding of what undergirds the public sphere. Their responses to widespread, serious criticism can be grandiose and smarmy. “I understand the concerns that people have about how tech platforms have centralized power, but I actually believe the much bigger story is how much these platforms have decentralized power by putting it directly into people’s hands,” said Mark Zuckerberg, in an October speech at Georgetown University. “I’m here today because I believe we must continue to stand for free expression.”

If these corporations spoke openly about their own financial interest in contagious memes, they would at least seem honest; when they defend themselves in the language of free expression, they leave themselves open to the charge of bad faith.

But the reason these companies—Facebook in particular—talk about free speech is not simply to conceal their economic stake in the reproduction of misinformation; it’s also a polite way for them to suggest that the real culpability for what pullulates on their platforms lies with their users. Facebook has always presented itself, in contrast to legacy gatekeepers, as a neutral bit of infrastructure; people may post what they like and access what they fancy. When Zuckerberg talks about “free expression,” he is describing the sanctity of a marketplace where supply is liberated to seek the level of demand. What he is saying, by implication, is that the affliction of partisan propaganda reflects not a problem of supply but of demand—a deep and transparent expression of popular desire.


This might be a maddening defense, but it is not a trivial argument to counter. Over the past few years, the idea that Facebook, YouTube, and Twitter somehow created the conditions of our rancor—and, by extension, the proposal that new regulations or algorithmic reforms might restore some arcadian era of “evidential argument”—has not stood up well to scrutiny. Immediately after the 2016 election, the phenomenon of “fake news” spread by Macedonian teenagers and Russia’s Internet Research Agency became shorthand for social media’s wholesale perversion of democracy; a year later, researchers at Harvard University’s Berkman Klein Center concluded that the circulation of abjectly fake news “seems to have played a relatively small role in the overall scheme of things.” A recent study by academics in Canada, France, and the US indicates that online media use actually decreases support for right-wing populism in the US. Another study examined some 330,000 recent YouTube videos, many associated with the far right, and found little evidence for the strong “algorithmic radicalization” theory, which holds YouTube’s recommendation engine responsible for the delivery of increasingly extreme content.

Regardless of how one study or another breaks, tech companies have reason to prefer abstract arguments about the values of untrammeled expression. They have chosen to adopt the language of classical liberalism precisely because it puts their liberal critics in an uncomfortable position: It’s unacceptably patronizing to claim that some subset of our neighbors have to be protected from their own demands. It’s even worse to question the authenticity of those demands in the first place—to suggest that the desires of our neighbors are not really their own. Critics must rely on such potted ideas as “astroturfing” to explain how it might be that good people come to demand bad things.

The case for corporate blame is, at any rate, probably more expedient than it is empirical. It’s much easier to imagine how we might exercise leverage over a handful of companies than it is to address the preferences of billions of users. It’s always tempting to search for our keys where the light is better. A better solution would require tech’s critics to take what people demand as seriously as the corporations do, even if that means looking into the dark.


The first step toward an honest reckoning with the reality of demand is to admit that political polarization long predates the rise of social media. By the time Facebook opened its walled orchard to everyone in 2006, the US had already spent 40 years sorting itself into two broad camps, as Ezra Klein points out in his new book Why We’re Polarized. At the beginning of the 1960s, the Democratic and Republican parties both contained self-described liberals and conservatives. Then the passage of civil rights legislation and Richard Nixon’s Southern strategy set in motion the coalescence of each party around a consensus set of “correct” views. Race was the original fault line, and has remained salient. But the constellations of other views often shifted and were increasingly secondary to the simpler matter of group affiliation.

Where many technology critics see the rise of social media, some 15 years ago, as a vast shift that ushered in the era of “filter bubbles” and tribal sorting, Klein describes it as less the original cause than an accelerant—especially insofar as it encouraged individuals to see all their beliefs and preferences, if only in brief but powerful moments of perceived threat, as potential expressions of a single underlying political identity. Facebook and Twitter allotted each user one persona, with a profile, a history, and a signaling apparatus of unprecedented reach. Users faced new and acute kinds of public pressure—to be coherent, for one thing—and could only look to other members of their communities for clues to what might viably constitute coherence.


Offline, too, people were being dragooned, subtly or otherwise, into increasingly cramped partisan identities. Klein draws on the work of the political scientist Lilliana Mason to describe how political polarization has resulted in the “stacking” of otherwise unrelated identities under the heading of political affiliation. Where we might once have expressed solidarity with one another along any number of axes that had no obvious political valence—as members of the same faith, residents of the same town, fans of the same music—more and more of these affiliations were, by the 2000s, tagged and subsumed under the two flagship “mega-identities” on offer in US politics.

Neither of these two sides could exist without the other: It’s very hard to give people a strong sense of “who we are” without defining “who we are not.” We might not like everything our side does, but we would rather be dead than identify with our opponents. The construction and policing of the all-important boundary between camps has come to feel like one of the daily burdens of being alive in the age of social media.

And as for social media’s role, none of this was deliberate or inevitable, as Klein sees it: “Few realized, early on, that the way to win the war for attention was to harness the power of community to create identity,” he writes. “But the winners emerged quickly, often using techniques whose mechanisms they didn’t fully understand.”

Taken on its own, however, the insight that social media both promotes and relies on swells of belonging seems insufficient to explain its contribution to Manichaean polarization. Social media could have produced a rich world of autarchic, jostling affiliations—a lively bazaar of many camps—and it’s a standard trope of internet nostalgists to long for the time when online identities could be fragmented. An individual, in those antediluvian days, could comfortably contain a range of identities, each expressed in its proper context. The fact that it hasn’t turned out that way on social media—the fact that, as Klein notes, the platforms have encouraged a more totalizing alignment—is one reason why many critics suspect that the apparatus is rigged, that we aren’t being given what we want but rather what some malign force wants us to want. It is much easier, once more, to invoke the perennial bugbear of “the algorithm” than it is to consider the idea that social sorting itself might be our most enduring preference.


In a recent article in The New York Times, Annalee Newitz expressed the familiar notion that “social media is broken.” But, at least by one reading, it’s working precisely as intended. Facebook was founded—or at least funded—on a serious, if esoteric, theory of demand, one that accounts for the origin and cultivation of desire.

In July 2004, the investor and PayPal cofounder Peter Thiel helped organize a small conference at Stanford University to discuss current events with his former mentor, the French literary critic and self-styled anthropologist René Girard. Thiel proposed “a reexamination of the foundations of modern politics” in the wake of 9/11, and the symposium proceeded in a decidedly apocalyptic register. “Today,” Thiel wrote in the essay he contributed to the event, “mere self-preservation forces all of us to look at the world anew, to think strange new thoughts, and thereby to awaken from that very long and profitable period of intellectual slumber and amnesia that is so misleadingly called the Enlightenment.” Thiel wrote that “the whole issue of human violence has been whitewashed away” by a political culture built on John Locke and the wishful concept of a social contract; he believed we had to turn to Girard for a more satisfying account of human irrationality and vengefulness.


As Girard had it, we are defined and constituted as a species by our reliance on imitation. But we are not mere first-order mimics: When we ape what someone else does, or covet what someone else has, we are in fact trying to want what they want. “Man is the creature who does not know what to desire, and he turns to others in order to make up his mind,” Girard wrote. “We desire what others desire because we imitate their desires.” Unable to commit to our own arbitrary wants, we seek to resemble other people—stronger, more decisive people. Once we identify a model we’d like to emulate, we train ourselves to make the objects of their desire our own.

The emotional signature of all this imitation—or mimesis—is not admiration but consuming envy. “In the process of ‘keeping up with the Joneses,’ ” Thiel writes, “mimesis pushes people into escalating rivalry.” We resent the people we emulate, both because we want the same things and because we know we’re reading from someone else’s script. As Girard would have it, the viability of any society depends on its ability to manage this acrimony, lest it regularly erupt into the violence of “all against all.”

Around the time of that 2004 symposium, Thiel was making a $500,000 investment in a small startup called The Facebook. He later attributed his decision to become its first outside investor to the influence of Girard.

“Social media proved to be more important than it looked, because it’s about our natures,” he told The New York Times on the occasion of Girard’s death in 2015. “Facebook first spread by word of mouth, and it’s about word of mouth, so it’s doubly mimetic.” As people like and follow and dilate on certain posts and profiles, the Facebook algorithm is trained to recognize the sort of people we aspire to be, and obliges us with suggested refinements. The platforms are not simply meeting demand, as Zuckerberg would have it, but they’re not really creating it either. They are, in a sense, refracting it. We are broken down into sets of discrete desires, and then grouped into cohorts along lines of statistical significance. The kinds of communities these platforms enable are ones that have simply been found, rather than ones that had to be forged.
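What would it mean, concretely, to “refract” demand this way? Purely as an illustration—Facebook’s actual ranking systems are proprietary, and every variable, number, and function below is a hypothetical stand-in—here is a minimal sketch of the pattern this paragraph describes: users reduced to vectors of discrete interactions, grouped into cohorts by statistical similarity, and then served their own cohort’s aggregate taste back to them.

```python
# Illustrative sketch only: a toy model of "refracting demand."
# This is not Facebook's system; it shows the generic pattern of
# clustering users by revealed preferences and recommending
# within-cohort favorites. All data here is synthetic.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Rows are users, columns are topics; entries are interaction counts
# (likes, follows, shares). In reality this matrix is enormous and sparse.
interactions = rng.poisson(lam=1.0, size=(1000, 20))

# Normalize each row so cohorts reflect taste, not raw activity level.
profiles = interactions / interactions.sum(axis=1, keepdims=True).clip(min=1)

# "Grouped into cohorts along lines of statistical significance":
# here, plain k-means clustering over preference profiles.
cohorts = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(profiles)

def recommend(user: int, k: int = 3) -> np.ndarray:
    """Return the user's cohort's top-k topics: demand refracted back."""
    peers = profiles[cohorts == cohorts[user]]
    return np.argsort(peers.mean(axis=0))[::-1][:k]

print(recommend(user=0))
```

Note that nothing in the sketch forges a community; the cohort is simply whatever the clustering happens to find.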

As the critic Geoff Shullenberger has pointed out, Facebook’s cultivation of these communities—structured by constant and simple mimetic reinforcement—is only half of a story that gets considerably darker. Girard spent the later decades of his career elaborating how, in myth and ancient history, human societies purchased peace and stability by displacing the bad blood of mimetic rivalry into violence against a scapegoat. “The war of all against all culminates not in a social contract but in a war of all against one,” Thiel writes, “as the same mimetic forces gradually drive the combatants to gang up on one particular person.”

Ancient religions, Girard argued, advanced rituals and myths to contain this bloodthirsty process. And Christianity, a religion centered around the crucifixion of an innocent scapegoat, promised transcendence of the entire dynamic with the revelation of its cruelty. (Girard was a professed Christian, as is Thiel.)

The problem, as Thiel sees it, is that we now live in a disenchanted age: “The archaic rituals will no longer work for the modern world,” he wrote in 2004. The danger of escalating mimetic violence was, in his view, both obvious and neglected. His concern at the time was with global terrorism in the wake of September 11, but later it seems he also came to worry about resentment toward the investor class in an age of growing inequality. In a set of notes published online in 2012 by the coauthor of Thiel’s book Zero to One, Thiel identifies tech founders as natural scapegoats in the Girardian sense: “The 99% vs. the 1% is the modern articulation of this classic scapegoating mechanism.”


Thiel’s prescient investment in Facebook could be interpreted as a gesture of faith in the power of social media platforms (Shullenberger calls them “scapegoating machines”) to step in and replace real violence with a new symbolic surrogate. That is, social media could serve to focus and organize the chaos of our untamed desires and, at the same time, focus and organize the potential violence of our untamed animus. The opportunity to vent on social media, and occasionally to join an outraged online mob, might relieve us of our latent desire to hurt people in real life. It’s easy to dismiss a lot of very online rhetoric that equates social media disagreement with violence, but in a Girardian account the conflation might reflect an accurate perception of the symbolic stakes: On this view, our tendency to experience online hostility as “real” violence is an evolutionary step to be cheered. The reason this has never happened before in human history is that we lacked a pervasive, no-cost signaling infrastructure. Now we have it.

Shullenberger makes a good case that Thiel might have intuited all this: that social media, with its paths of least resistance, could provide not only this kind of cheap symbolic sorting but an ultimately symmetrical version of it. What we end up with is not the 99 percent versus the 1 percent but a vast, virtual stalemate in a symbolically bipolar universe. Affinities based on the clever algorithmic sorting of refracted desires are only weakly bound. In the absence of a grand, substantive vision for who “we” are, we draw our strength and certainty from the coherent depravity of “them.”

It’s easy to relate to this: While most of us are rarely wholly satisfied by the goodness and purity of our own team, with its heterodoxy and lack of discipline, we’re deeply satisfied by what we interpret as the uniform villainy of our opponents. Think, for example, of how confidently liberals include among the “bad guys” someone as silly as the Canadian academic and self-help guru Jordan Peterson alongside a neo-Nazi like Richard Spencer. We seek and prize intelligible solidarity in our enemies with much greater pleasure than we do in our own camp. As Shullenberger puts it in one of his essays about Thiel, “for someone overtly concerned about the threat posed by such forces to those in positions of power, a crucial advantage would seem to lie in the possibility of deflecting violence away from the prominent figures who are the most obvious potential targets of popular ressentiment, and into internecine conflict with other users.” The goal is an evenly apportioned virtual antagonism in the stable perpetuity of a very vivid game.

If this was really Thiel’s idea—that Facebook might detach the world of permanent symbolic conflict from the real world of actual politics—then it was, or has become, an entirely cynical one. On the basis of his public doubts about democracy, his reverence for the occult elitism of the philosopher Leo Strauss, and his relationship to Trump, it’s clear enough how he thinks reality ought to be administered: by people like him and Zuckerberg, while the rest of us are distracted by the online videogames of our lives. (According to The Wall Street Journal, Thiel still exerts “outsized influence” as a Facebook board member.) And in retrospect, the idea that social media might redirect our worst mimetic impulses isn’t only cynical but devastatingly wrong. It’s unclear how it could even begin to account for the very nonsymbolic violence that spilled off of Facebook and into the real worlds of Myanmar and Sri Lanka—and, depending on your perspective, the United States as well.


In the end, as it becomes increasingly untenable to blame the power of a few suppliers for the unfortunate demands of their users, it falls to tech’s critics to take the fact of demand—that people’s desires are real—even more seriously than the companies themselves do. Those desires require a form of redress that goes well beyond “the algorithm.” To worry about whether a particular statement is true or not, as public fact-checkers and media-literacy projects do, is to miss the point. It makes about as much sense as asking whether somebody’s tattoo is true. A thorough demand-side account would allow that it might in fact be tribalism all the way down: that we have our desires and priorities, and they have theirs, and both camps will look for the supply that meets their respective demands.

Just because you accept that preferences are rooted in group identity, however, doesn’t mean you have to believe that all preferences are equal, morally or otherwise. It just means our burden has little to do with limiting or moderating the supply of political messages or convincing those with false beliefs to replace them with true ones. Rather, the challenge is to persuade the other team to change its demands—to convince them that they’d be better off with different aspirations. This is not a technological project but a political one.




Gideon Lewis-Kraus is a contributing editor at WIRED. He last wrote about the blockchain platform Tezos in issue 26.07.

This article appears in the February issue.

Let us know what you think about this article. Submit a letter to the editor at mail@wired.com.



