Breaking Open the "Black Box": How Risk Assessments Undermine Judges' Perceptions of Young People
Date: 08-27-2018

Report states 60% of the risk score is attributable to age
From Prison Policy Initiative:

Imagine that you’re a judge sentencing a 19-year-old woman who was roped into stealing a car with her friends one night. How does her age influence your decision? Do you grant her more leniency, understanding that her brain is not yet fully developed and that peers have greater influence at her age? Or, given the strong link between youth and criminal behavior, do you err on the side of caution and impose a longer sentence or closer supervision?

Now imagine that you’re given a risk assessment score for the young woman, and she is labelled “high risk.” You don’t know much about the scoring system except that it’s “evidence-based.” Does this new information change your decision?

For many judges, this dilemma is very real. Algorithmic risk assessments have been widely adopted by jurisdictions hoping to reduce discrimination in criminal justice decision-making, from pretrial release decisions to sentencing and parole. Many critics (including the Prison Policy Initiative) have voiced concerns about the use of these tools, which can actually worsen racial disparities and justify more incarceration. But in a new paper, law professors Megan T. Stevenson and Christopher Slobogin consider a different problem: how algorithms weigh some factors — in this case, youth — differently than judges do. They then discuss the ethical and legal implications of using risk scores produced by those algorithms to make decisions, especially when judges don’t fully understand them.