
The Age of Misinformation

Jonathan Zittrain

This article was originally published in The Atlantic on May 3, 2017.

There are two big problems with America’s news and information landscape: concentration of media, and new ways for the powerful to game it.

First, we increasingly turn to only a few aggregators like Facebook and Twitter to find out what’s going on in the world, which makes their decisions about what to show us impossibly fraught. Those aggregators draw, opaquely but consistently, from largely undifferentiated sources to figure out what to show us. They are, they often remind regulators, only aggregators rather than content originators or editors.

Second, the opacity with which these platforms offer us news and set our information agendas means that we don’t have cues about whether what we see is representative of sentiment at large, or for that matter of anything, including expert consensus. But expert outsiders can still game the system to ensure disproportionate attention to the propaganda they want to inject into public discourse. Those users might employ bots, capable of numbers that swamp actual people, and of a persistence that ensures their voices are heard above all others while still appearing to be humbly part of the real crowd.

What to do about it? We must realize that the market for vital information is not merely a market.

The ideals of the journalistic profession—no doubt flawed in practice, but nonetheless worthy—helped mitigate an earlier generation of concentration of media ownership. News divisions were by strong tradition independent of the commercial side of broadcasting and publishing, while cross-subsidized by other programming. And in the United States, they were largely independent of government, too, with exceptions flagrantly sticking out.

Facebook and Twitter for social media, and Google and Microsoft for search, must recognize a special responsibility for the parts of their services that host or inform public discourse. They should be upfront about how they promote some stories and de-emphasize others, instead of treating their ranking systems as trade secrets. We should hold them to their desire to be platforms rather than editors by insisting that they allow anyone to write and share algorithms for creating user feeds, so that they aren’t saddled with the impossible task of making a single perfect feed for everyone.
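
To make that proposal concrete, here is a minimal sketch, in TypeScript, of what a pluggable, user-chosen feed algorithm could look like. Everything in it (the Post shape, the FeedAlgorithm type, the sample rankings) is assumed for illustration; it is not any platform’s actual API.

    // A hypothetical plug-in interface for user-chosen feed algorithms.
    // Names and shapes are illustrative assumptions, not a real platform API.
    interface Post {
      id: string;
      authorId: string;
      text: string;
      postedAt: Date;
      likes: number;
    }

    // A feed algorithm is just a function from candidate posts to an ordered feed.
    type FeedAlgorithm = (candidates: Post[]) => Post[];

    // One user-shared algorithm: strict reverse-chronological order.
    const chronological: FeedAlgorithm = (candidates) =>
      [...candidates].sort((a, b) => b.postedAt.getTime() - a.postedAt.getTime());

    // Another: favor recency over raw popularity, so virality alone can't dominate.
    const quietRecency: FeedAlgorithm = (candidates) => {
      const score = (p: Post) => p.postedAt.getTime() - p.likes * 60_000; // each like "ages" a post by a minute
      return [...candidates].sort((a, b) => score(b) - score(a));
    };

    // The platform hosts the posts; the user picks (or writes) the ranking.
    function renderFeed(candidates: Post[], algorithm: FeedAlgorithm): Post[] {
      return algorithm(candidates);
    }

The point of the design is that the platform stays a host rather than an editor: it carries the content, while the judgment about ordering moves to algorithms that users can inspect, swap, and share.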

There should be a method for non-personally-identifying partial disclosure: my Twitter-mates could be assured, say, that I am, in fact, a person, and from what country I hail, even if I don’t choose to advertise my name. Bots can be allowed—but should be known for the mere silhouettes that they are.
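
A minimal sketch of what such a partial-disclosure record might contain, assuming some unnamed third party vouches for the claims; the field names are invented here for illustration:

    // A hypothetical attestation for partial, non-identifying disclosure.
    // Field names and the verifier are assumptions, not a real protocol.
    interface AccountAttestation {
      accountId: string;      // platform handle, possibly pseudonymous
      isHuman: boolean;       // bots are allowed, but labeled as such
      countryCode?: string;   // e.g. "US"; disclosed only if the user opts in
      attestedBy: string;     // whichever third party vouched for the claims
      attestedAt: Date;
    }

    // What other users would see: the claims, never the legal identity.
    function describeAccount(a: AccountAttestation): string {
      const who = a.isHuman ? "a verified person" : "a declared bot";
      const where = a.countryCode ? ` posting from ${a.countryCode}` : "";
      return `${a.accountId} is ${who}${where}.`;
    }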

And Facebook and Twitter should version-up the crude levers of user interaction that have created a parched, flattening, even infantilizing discourse. For example, why not have, in addition to “like,” a “Voltaire,” a button to indicate respect for a point—while disagreeing with it? Or one to indicate a desire to know if a shared item is in fact true, an invitation to librarians and others to offer more context as it becomes available, flagged later for the curious user?
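
As a sketch only (the reaction names come straight from the paragraph above, but the data shapes are assumptions), a richer reaction vocabulary might be modeled like this:

    // A hypothetical reaction vocabulary beyond "like"; purely illustrative.
    type Reaction =
      | "like"
      | "voltaire"      // respect for the point, while disagreeing with it
      | "isThisTrue";   // an invitation to librarians and others to add context

    interface ReactionEvent {
      postId: string;
      userId: string;
      reaction: Reaction;
      at: Date;
    }

    // When context later arrives for a post, find everyone who asked "is this true?"
    // so they can be flagged as new information becomes available.
    function curiousUsers(events: ReactionEvent[], postId: string): string[] {
      return events
        .filter((e) => e.postId === postId && e.reaction === "isThisTrue")
        .map((e) => e.userId);
    }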

Finally, it’s time for a reckoning with the bankrupt system of click-based advertising. By “bankrupt” I don’t mean that it’s bad for America or the world, though it is. Rather, by its own terms it is replete with fraud. The same bots that populate Twitter armies also generate clicks that are meaningless: money out of the pockets of advertisers, with no human impact to show for it. There are thoughtful proposals to reseed a media landscape of genuine and diverse voices, and we would do well to experiment widely with them as the clickbait architecture collapses of its own accord.

While there is no baseline pure or neutral architecture for discourse, there are better and worse ones, and the one we have now is being exploited by those with the means and patience to game it. It’s time to reorient what we have with a focus on loyalty to users: honestly satisfying their curiosity, and helping them find and engage with others so that disagreement does not entail doxxing and threats, but rather reinforces the human aspiration to understand our world and our fellow strugglers within it.

Jonathan Zittrain is a professor at Harvard Law School and the Kennedy School of Government. He is also a professor of computer science at the Harvard School of Engineering and Applied Sciences, and the co-founder of the Berkman Klein Center for Internet & Society.

This article is part of The Democracy Project, a collaboration with The Atlantic.

May 12, 2017