
Sentenced by Algorithm

Julia Angwin

An upcoming talk by journalist Julia Angwin

Irina Raicu

Across the country, felons awaiting sentencing or serving time in many of America’s prisons will see their futures determined, at least in part, by algorithms (step-by-step instructions that computers follow to process data). In the criminal justice system, algorithmic decision-making is used in the hope of reducing bias. As it turns out, however, we don’t have evidence that it does so; in fact, we are beginning to see evidence that the algorithms themselves encode and perpetuate bias.

On September 22, journalist Julia Angwin will speak at Santa Clara University about algorithmic decision-making and algorithmic accountability. Part of the “IT, Ethics, and Law” lecture series (co-sponsored by the Markkula Center for Applied Ethics and the High Tech Law Institute), her talk will be free and open to the public.

You may well be familiar with Angwin’s work on the “What They Know” series published several years ago by the Wall Street Journal. For most readers, that in-depth analysis of the variety and extent of the tracking of internet users, and of the uses made of the personal data collected online, was eye-opening. In the ongoing debate about online privacy, that series has become a foundational document.

More recently, as a senior reporter for ProPublica, Angwin has been investigating and writing about the use of algorithmic decision-making tools in the criminal justice system. In May 2016, ProPublica published a report titled “Machine Bias”; its tagline reads, “There’s software used across the country to predict future criminals. And it’s biased against blacks.” In the report, co-authored with Jeff Larson, Surya Mattu, and Lauren Kirchner, Angwin writes about risk assessment tools that

are used to inform decisions about who can be set free at every stage of the criminal justice system, from assigning bond amounts… to even more fundamental decisions about defendants’ freedom. In Arizona, Colorado, Delaware, Kentucky, Louisiana, Oklahoma, Virginia, Washington and Wisconsin, the results of such assessments are given to judges during criminal sentencing.

Angwin details what she and her colleagues found when they evaluated the accuracy of the predictions made by a particular software program used in Broward County, Florida (as well as in many other parts of the U.S.):

The score proved remarkably unreliable in forecasting violent crime: Only 20 percent of the people predicted to commit violent crimes actually went on to do so. …

We also turned up significant racial disparities…. In forecasting who would re-offend, the algorithm made mistakes with black and white defendants at roughly the same rate but in very different ways.

  • The formula was particularly likely to falsely flag black defendants as future criminals, wrongly labeling them this way at almost twice the rate as white defendants.
  • White defendants were mislabeled as low risk more often than black defendants.
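
To make the distinction between those two kinds of error concrete, here is a minimal, purely illustrative sketch in Python, using made-up numbers rather than ProPublica’s actual data or methodology. It shows how two groups can face the same overall error rate while one bears far more false positives (being wrongly flagged as high risk) and the other far more false negatives (being wrongly labeled low risk):

```python
# Illustrative only: hypothetical numbers, not ProPublica's data or methodology.
# Shows how two groups can have similar overall error rates while the *kinds*
# of errors (false positives vs. false negatives) differ sharply.

def error_rates(high_risk, reoffended):
    """Compute false positive and false negative rates for one group.

    high_risk:   list of 0/1 flags, 1 = labeled "high risk" by the tool
    reoffended:  list of 0/1 flags, 1 = actually re-offended later
    """
    fp = sum(1 for h, r in zip(high_risk, reoffended) if h == 1 and r == 0)
    fn = sum(1 for h, r in zip(high_risk, reoffended) if h == 0 and r == 1)
    negatives = sum(1 for r in reoffended if r == 0)  # people who did not re-offend
    positives = sum(1 for r in reoffended if r == 1)  # people who did re-offend
    return fp / negatives, fn / positives

# Two hypothetical groups of ten defendants each (made-up data).
group_a_scores = [1, 1, 1, 1, 0, 0, 0, 0, 1, 0]
group_a_actual = [1, 1, 0, 0, 0, 0, 0, 1, 0, 0]
group_b_scores = [0, 0, 0, 1, 1, 0, 0, 0, 1, 0]
group_b_actual = [1, 1, 0, 1, 1, 0, 0, 1, 0, 0]

for name, scores, actual in [("Group A", group_a_scores, group_a_actual),
                             ("Group B", group_b_scores, group_b_actual)]:
    fpr, fnr = error_rates(scores, actual)
    print(f"{name}: false positive rate = {fpr:.0%}, false negative rate = {fnr:.0%}")
```

With these hypothetical inputs, both groups are misclassified at the same overall rate (4 of 10), yet Group A’s errors are mostly false positives while Group B’s are mostly false negatives; that is the shape of the disparity the ProPublica report describes.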

The ProPublica analysis has been cited in numerous publications, as well as in an opinion of the Wisconsin Supreme Court. Algorithmic decision-making tools continue to be used across the country, however—with no apparent evidence that they reduce the number of biased decisions in the criminal justice system. As reporter Rose Eveleth writes in Motherboard,

these algorithms are likely biased against people of color. But so are judges. So how do they compare? How does the bias present in humans stack up against the bias programmed into algorithms?

… [I]t’s essential to test whether these algorithmic recidivism scores exacerbate, reduce, or otherwise change existing bias.

Most of the stories I’ve read about these sentencing algorithms don’t mention any such studies. … As far as I can find, and according to everybody I’ve talked to in the field, nobody has done this work, or anything like it. These scores are being used by judges to help them sentence defendants and nobody knows whether the scores exacerbate existing racial bias or not.

The criminal justice system is only one context in which algorithmic decision-making is being implemented right now, with little or no transparency as to the criteria and formulas used to make determinations about individuals. As Angwin points out in a New York Times piece titled “Make Algorithms Accountable,” “Companies use [algorithms] to sort through stacks of résumés from job seekers,” and “[c]redit agencies use them to determine our credit scores.” At the same time, since the algorithms involved are proprietary, there is often no way for the people impacted to challenge the basis of those algorithmic determinations.

Most people don’t yet realize that this is happening, and the notion of “algorithmic accountability” is still a developing concept. If you would like to find out more about it, register here and join us on the evening of September 22. Angwin’s talk begins at 7:30, in the Forbes Conference Room in Lucas Hall, on the Santa Clara University campus. If you will not be able to attend but would like to submit a question, please add it in the comments to this blog.

Photograph by Deborah Copaken Kogan

Sep 1, 2016