
How Bots and Humans Might Work to Stop Harassment

Craig Newmark

This article was originally published in The Atlantic on May 24, 2017.

There are some really bad people who harass journalists. Women and minorities, especially, are the targets of extreme vitriol. Yet many newsrooms do little or nothing to protect their employees, or to think through how journalists and organizations should respond when harassment occurs.

Harassers and trolls have multiple motivations: often simple racism or misogyny, sometimes support for misinformation, sometimes an effort to suppress law enforcement or intelligence operations. Frequently, what appear to be multiple harassers are actually sock puppets, Twitter bots, or multiple accounts operated by a single individual.

Sustained harassment can do some serious psychological damage, and I speak from personal experience. Outright intimidation is a related problem, suppressing the delivery of trustworthy news—the kind of news reporting that is vital to democratic governance.

The usual solution is to ignore trolls and harassers, but they can be persistent, and they often game the system successfully. You can mute or block a harasser on Twitter or Facebook, but it's easy enough for them to create a new account in most systems.

If you're knowledgeable in Internet forensics, you can sometimes trace a harasser’s account, and “dox” them—that is, post personally identifiable information as a deterrent. However, that really needs to be done in a manner consistent with site terms and conditions, maybe working with their trust and safety team. (Seriously, this is a major ethical and legal issue.)

Or, if you have a thick skin, you can respond with “shock and awe,” that is, with a brutal response in turn. Or, you can reason with them, which has sometimes been known to work. Retaliation against professional trolls, however, often backfires: they’re usually well-funded, without conscience, and often very smart.

One method to address rampant harassment would be for news organizations to work with their security departments to evaluate the worst abuse and do risk assessments. Sometimes threats are only threats—but sometimes they’re serious. News organizations might share information regarding harassers, while respecting the rights of the accused and the terms and conditions of the organizations involved. There are also serious legal and ethical considerations here.

Perhaps news orgs could enlist subscribers or other friends to bring harassment to light. Participants in such a system could simply tweet the harasser an empty message, or one with a designated hashtag, withdrawing approval while avoiding bringing attention to the actual harassment. The empty message might communicate a lot, in zero words.

I believe that the targets of harassment need help from platforms, and here’s the start of a way that could happen. I’m attempting to balance fairness with preventing harassers from gaming the system, so please consider this only a start.

Let’s use Twitter for this thought experiment, mostly because I understand it, and they’re genuinely trying to figure this out.

Suppose you’re a reporter who is a verified user, and you get a harassing tweet. You’d do a quote retweet to a specific account as a way to report the harassment. That specific account would be a bot which could begin to analyze the harassing tweet. The bot would enter the email and IP addresses associated with the harassing account into a database.
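To make the mechanics concrete, here is a minimal sketch of the intake side in Python. It assumes a plain SQLite table and a hypothetical `lookup_account_metadata` helper; account email and IP data are visible only to the platform itself, so an outside bot could not actually obtain them, and the schema is made up for illustration.

```python
import sqlite3
from datetime import datetime, timezone

# Illustrative schema; a real trust-and-safety pipeline would be far richer.
db = sqlite3.connect("harassment_reports.db")
db.execute("""
    CREATE TABLE IF NOT EXISTS reports (
        reported_at TEXT,   -- when the quote-retweet report came in
        tweet_id    TEXT,   -- the quoted (harassing) tweet
        account_id  TEXT,   -- account that posted it
        email       TEXT,   -- hypothetical: visible only to the platform
        ip_address  TEXT,   -- hypothetical: visible only to the platform
        reporter_id TEXT    -- verified journalist who filed the report
    )
""")

def lookup_account_metadata(account_id):
    """Placeholder for platform-internal data an outside bot cannot see."""
    return {"email": "unknown@example.com", "ip_address": "0.0.0.0"}

def handle_report(quoted_tweet, reporter_id):
    """Record one quote-retweet report sent to the bot account."""
    meta = lookup_account_metadata(quoted_tweet["author_id"])
    db.execute(
        "INSERT INTO reports VALUES (?, ?, ?, ?, ?, ?)",
        (datetime.now(timezone.utc).isoformat(),
         quoted_tweet["id"],
         quoted_tweet["author_id"],
         meta["email"],
         meta["ip_address"],
         reporter_id),
    )
    db.commit()
```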

Periodically, a process would run to see if there’s a pattern of harassment from that IP or email address, and if so, that account could be suspended and contacted.
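Continuing the sketch above, the periodic check might be a simple query over that table. The three-report threshold is an arbitrary illustration, and any suspension would still need human review.

```python
def find_repeat_offenders(db, min_reports=3):
    """Return IP addresses whose report count crosses an (arbitrary) threshold."""
    return db.execute(
        """
        SELECT ip_address,
               COUNT(DISTINCT account_id) AS accounts,
               COUNT(*) AS report_count
        FROM reports
        GROUP BY ip_address
        HAVING COUNT(*) >= ?
        """,
        (min_reports,),
    ).fetchall()

# Run on a schedule (e.g. hourly); flagged rows go to a human reviewer,
# who decides whether to suspend and contact the account.
for ip, accounts, report_count in find_repeat_offenders(db):
    print(f"Review: {report_count} reports across {accounts} account(s) from {ip}")
```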

While most journalists would find it easy to do such a retweet, perhaps this should be more open to all, which could involve a harassment report button or option in the menu on a particular tweet. (There’s a button and other means within the Twitter UI to do some of this, and Twitter has signaled that more’s on the way.)

News orgs also need to step up to protect their own reporters.

They could enlist subscribers or other friends to bring harassment to light. Participants in such a system could simply send an automated tweet to the harasser that says “This account has been reported for harassment and is being monitored by the community.” This type of system publicly tells harassers “you are on notice” and that the community is watching. Note that this might be easily gamed unless reports come from verified journalists or similar.
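Here is one way the automated notice and that verification check might fit together, again only a sketch; `post_reply` and the verified-reporter list stand in for real platform or newsroom infrastructure.

```python
NOTICE = ("This account has been reported for harassment "
          "and is being monitored by the community.")

def post_reply(tweet_id, text):
    """Placeholder for the platform call that actually posts the reply."""
    print(f"reply to {tweet_id}: {text}")

def notify_harasser(report, verified_reporter_ids):
    # Only act on reports from vetted accounts, so the notice itself
    # cannot be turned into a harassment tool (the gaming risk noted above).
    if report["reporter_id"] in verified_reporter_ids:
        post_reply(report["tweet_id"], NOTICE)
```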

Since this is a significant job, social networks may want to test organizing a volunteer community—like the one Wikipedia has—to help monitor the reports and accounts. Social networks could take it a step further and have trained members of the community respond to some of the harassers (not the bots) to discuss why their tweets were reported for harassment. Teaching moments are important in addressing harassment. If the account holder continues the harassment, they get permanently banned from the social network. Some online games have adopted a similar strategy, with some success.
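One way to express that escalation as a simple policy, purely illustrative; the step counts and actions here are assumptions, not anything a specific platform or game has published.

```python
# Illustrative escalation ladder for confirmed harassment incidents.
ESCALATION_STEPS = [
    (1, "a trained community member reaches out to explain the report"),
    (2, "temporary suspension pending human review"),
    (3, "permanent ban from the network"),
]

def next_action(confirmed_incidents):
    """Map a count of confirmed incidents to the next response."""
    for threshold, action in reversed(ESCALATION_STEPS):
        if confirmed_incidents >= threshold:
            return action
    return "no action"
```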

I realize these ideas are fairly half-baked; the devil’s in the details. I’m also omitting a lot of detail, since too much specificity could help harassers game this or other systems. In any case, we need to start somewhere. Harassment and intimidation of reporters is a real problem, with real consequences for democracy.

Craig Newmark is an Internet entrepreneur best known as the founder of Craigslist.

This article is part of The Democracy Project, a collaboration with The Atlantic.
