Markkula Center for Applied Ethics

Responding to Extremist Content Online

Should Silicon Valley help disrupt ISIS?

Irina Raicu

Last Thursday morning, I was at Stanford listening to a panel discussion on how best to address extremist content online. The event’s organizers framed the question: “How should we approach extremist content to best promote free expression, privacy, diversity, personal and national security?” As we were listening to that presentation, in Southern California, 14 people were killed and 21 wounded in the San Bernardino massacre. News outlets are now reporting that at least one of the killers appears to have been “self-radicalized.” Through which channels? That’s not clear yet. But the married killers had apparently met online.

The Stanford panel discussion and the Q&A that followed that day were not, of course, focused on the San Bernardino attack. In any case, only a few suggestions were offered on how individuals, community groups, companies (including social media ones), or governments might respond to extremist content online. One participant argued that we should promote counter-narratives to answer the extremist ones—that the response to ideas and speech should come in the form of alternative ideas and speech. But who would be the credible/effective providers of such content, who would finance it, and how such content would surface among all the rest remained open questions. Another panelist argued that taking down content and shutting down extremist websites only drives the posters and readers deeper into the “deep Web,” where they are harder to track. There was also significant concern among the panelists that the definition of “extremist” content was hard to pin down and that a vague definition could be used to stifle freedom of speech; that efforts to take down extremist content might themselves backfire or consume resources ineffectively; and that the implementation of governmental programs trying to address extremist violence has in practice been both ineffective and discriminatory (by, for example, unfairly casting suspicion on all Muslims, while failing to investigate right-wing home-grown extremism in the U.S.).

Several European countries have already adopted measures intended to respond to the presence of extremist content online—including the generation of counter-narratives. Some civil libertarians are now responding to the U.K. efforts, for example, taking issue with what they describe as “pressuring platforms to censor content.” Scott Craig and Emma Llansó, of the Center for Democracy & Technology, argue that “[t]he limits of free speech in the UK must be clearly articulated in law and reviewable by independent courts, not created ad-hoc by private companies under pressure from the government.” They add that “[s]ilencing radical views will not lessen their appeal – and may even increase it. Stifling dissent is no way to challenge extreme ideologies; it only pushes conflict out of sight, and deprives the broader populace of an awareness of the views and tensions that exist within their communities.”

At the same time, social media platforms themselves don’t want to serve as conduits for extremist content, and they are also coming under increasing pressure from non-governmental sources to do more to eliminate it. A class action lawsuit filed on behalf of tens of thousands of Israelis, for example, claims that Facebook has not done enough to take down incitements to violence during the recent wave of terrorist attacks in Israel.

Some have called for the application of technological measures to control the dissemination of violent extremist content online, such as repurposing the tools currently used to combat online child sexual exploitation. However, in a recent policy brief titled “Violent Extremism: The New Online Safety Discussion,” Emma Morris—the International Policy Manager of the Family Online Safety Institute—argues that “[t]he technology that has been developed to combat online child sexual exploitation … should not be repurposed for extremist material. Online CSE is almost universally illegal, whereas much of the material in the extremist space is borderline illegal.” In the same brief, she points out that “[i]dentifying extremist words, pictures and videos is much easier said than done. What seemed to be an entirely innocuous Twitter feed to one person may constitute propaganda and radicalization to someone else,” and she adds that it is “vital that CSE technology remains respected and used with industry, and applying it to extremist material may bring that into question.”

Morris calls on parents to discuss broader issues with their children:

While the media can sensationalize the potential of harm, it is generally agreed that the risks of children becoming radicalized online remains extremely low. However, the possibility of children being exposed to upsetting or confusing content remains a real possibility. Instead of stopping them using the Internet, parents should encourage children to ask questions, develop critical thinking and report inappropriate or harmful content from a young age.

That raises the question, however: when children do report such content to their parents, what should the parents do next? Report it to the companies on whose platforms such content is found? Report it to law enforcement? And, in turn, what should the companies or agencies thus alerted do in response?

In the wake of the San Bernardino shooting, presidential candidate Hillary Clinton has said that “We need to put the great [Silicon Valley] disrupters at work at disrupting ISIS.” In an article quoting that call, New York Times reporter David Sanger noted, however, that Clinton’s “critique of American technology companies was impassioned but vague on the specifics of what she was asking them to do.” President Obama and others have issued similarly vague calls for more action. Whether through new technology, broader Terms of Service statements, or more internal resources dedicated to taking down “inappropriate” content, Internet companies are being asked to do more to combat extremism, even as various stakeholders are debating whether such efforts would be more harmful than beneficial.

Yet doing nothing, or researching solutions more deeply (for how long?) before doing anything, seems like a failure, too, an abdication of responsibility. We need more discussions like the one held at Stanford, and we do need to be realistic about the limited impact and the potential dangers of each of the measures that have been proposed so far in response to violent extremist content; however, acknowledging the fuzzy edges of the definition of “extremist” should not prevent the case-by-case evaluation and swift removal of violent incitement. If nothing else, that signals clearly that speech that promotes violence, speech whose purpose is precisely to silence others, is not acceptable in our public fora.

We need the counter-narratives, as well, but a new report by George Washington University’s Program on Extremism makes it clear that the phenomenon of filter bubbles plays out in the efforts to counter extremist content, too. A recent article in The Verge discusses that report and notes what happens when users attempt to engage with ISIS supporters on various social media: “on Twitter particularly, Gilkes [one of the George Washington University researchers] says this calling out of ISIS members doesn't accomplish much. More often than not, the dissenting user will end up on one of ISIS's curated block lists, and the group's online community will continue to interact with only its own world view.”

Twitter’s curated block lists were initially devised as a way to counter online harassment… “Disrupting” extremist speech online, it turns out, is much easier said than done.

December 9, 2015

Photo by CLUC, used, without modification, under a Creative Commons license.

 
