
This emperor has no clothes: blind judicial acceptance of criminal risk assessment algorithms

Jan 26, 2019 | Firm News

In his book Fairy Tales Told for Children, Hans Christian Andersen tells the tale of “The Emperor’s New Clothes.” It is a tale about two swindlers who promise an emperor a new suit of clothes that, they claim, is invisible to anyone who is unfit for his position, stupid, or incompetent. In reality, they make no clothes at all, leading everyone to believe that the clothes are simply invisible to them. When the emperor parades before his subjects in his new “clothes,” no one dares to say that they see no suit of clothes on him, for fear of being thought stupid. Finally, a child cries out, “But he isn’t wearing anything at all!”

The tale has reached the status of legend. “The emperor’s new clothes” is now a standard metaphor for anything that smacks of pretentiousness, pomposity, social hypocrisy, collective denial, or hollow ostentation. It took someone who obviously lacked the sophistication to see the invisible clothes to have the courage to challenge authority and to speak truth to a powerful lie. Social psychologists use the term “pluralistic ignorance” to describe a situation in which people erroneously infer that they feel differently from their peers, even though they are behaving similarly. It has also been described this way: “no one believes, but everyone thinks that everyone believes”; or, alternatively, everyone is ignorant of whether the emperor has clothes on or not, but believes that everyone else is not ignorant. Nothing is said, out of fear.

Pluralistic ignorance appears in our daily lives. For instance, picture a hall full of law students listening to a complicated lecture. After many minutes of incomprehensible material, the lecturer pauses and asks if there are any questions. No hands go up. You look around the room. Could these people really understand what the lecturer is talking about? You yourself are completely lost. Your fear of looking stupid keeps you from raising your hand, but as you look around the room at your impassive classmates, you interpret their similar behavior differently: you take their failure to raise their hands as a sign that they understand the lecture, that they genuinely have no questions. These different assumptions you make about the causes of your own behavior and the causes of your classmates’ behavior constitute pluralistic ignorance.

Fortunately, pluralistic ignorance can be dispelled, and its negative consequences alleviated, through education. For example, most law students form study groups, discuss the incomprehensible material, and discover that their classmates did not understand it either. They then work together to master what was once incomprehensible.

In State v. Loomis, 2016 WI 68, 371 Wis. 2d 235, 881 N.W.2d 749, the Wisconsin Supreme Court held that a circuit court’s consideration of COMPAS, an algorithmic risk assessment, at sentencing does not violate a defendant’s right to due process.  The problem with reaching this conclusion?  The methodology used to produce the assessment was disclosed neither to the court nor to the defendant.  The court was not troubled by this fact, since COMPAS uses only publicly available data and data provided by the defendant.  It therefore concluded that Loomis could have denied or explained any information that went into the report and thus could have verified the accuracy of the information used in sentencing.  Loomis, 881 N.W.2d at 761-62.  In her concurrence, Justice Abrahamson agreed with the judgment but was concerned that the court had difficulty understanding algorithmic risk assessments.  Id. at 774 (Abrahamson, J., concurring).

Justice Abrahamson raised her hand and told us that this emperor really has no clothes.  Loomis avoided dealing with criticisms of algorithmic risk assessments like COMPAS.  But a number of people have begun to raise their hands and question whether this emperor really wears any clothes.  See, e.g., Eric Holder, Att’y Gen., U.S. Dep’t of Justice, Address at the National Association of Criminal Defense Lawyers 57th Annual Meeting and 13th State Criminal Justice Network Conference (Aug. 1, 2014).  The Attorney General specifically said that the utility of any data from an algorithmic risk assessment depends upon “how this data is harnessed and put to use.”  Specifically, “we need to be sure the use of aggregate data analysis won’t have unintended consequences . . . like inadvertently undermin[ing] our efforts to ensure individualized and equal justice. By basing sentencing decisions on static factors and immutable characteristics – like the defendant’s education level, socioeconomic background, or neighborhood – they may exacerbate unwarranted and unjust disparities that are already far too common in our criminal justice system and in our society.”  See also Eric Holder, Att’y Gen., U.S. Dep’t of Justice, Remarks at the Annual Meeting of the American Bar Association’s House of Delegates (warning that risk assessment might perpetuate racial disparities already prevalent throughout the criminal justice system).

Legal commentators tell us that evidence-based sentencing amounts to “overt discrimination based on demographics and socioeconomic status.”  Sonja B. Starr, Evidence-Based Sentencing and the Scientific Rationalization of Discrimination, 66 Stan. L. Rev. 803, 806 (2014).  In fact, independent testing of the COMPAS assessment tool used in Loomis’s sentencing showed that offenders of color were more likely to receive higher risk ratings than were white offenders.  Julia Angwin et al., “Machine Bias” (ProPublica 2016).  For instance, ProPublica compared a black teenage girl with no prior record who stole a bicycle with a middle-aged white man who stole hardware from a Home Depot.  Importantly, he had prior armed robbery convictions, whereas she had no record.  COMPAS deemed the young girl a high-risk individual and her older counterpart a low-risk one.  Id.  Other researchers have likewise found COMPAS to be racially biased.  Clearly, evidence-based practices are not inherently benign with respect to their effect on mass incarceration and the breadth of the penal state.  Cecelia Klingele, The Promises and Perils of Evidence-Based Corrections, 91 Notre Dame L. Rev. (2016).

Someone needs to tell the emperor that risk assessments include criminal history as a factor, which, owing to potential systemic bias in policing and prosecution, might elevate risk scores for black offenders.  Using historical data to train risk assessment tools could mean that machines are copying the mistakes of the past:

Modern-day risk assessment tools are often driven by algorithms trained on historical crime data. As we’ve covered before, machine-learning algorithms use statistics to find patterns in data. So if you feed it historical crime data, it will pick out the patterns associated with crime. But those patterns are statistical correlations—nowhere near the same as causations. If an algorithm found, for example, that low income was correlated with high recidivism, it would leave you none the wiser about whether low income actually caused crime. But this is precisely what risk assessment tools do: they turn correlative insights into causal scoring mechanisms.  Now populations that have historically been disproportionately targeted by law enforcement—especially low-income and minority communities—are at risk of being slapped with high recidivism scores. As a result, the algorithm could amplify and perpetuate embedded biases and generate even more bias-tainted data to feed a vicious cycle. Because most risk assessment algorithms are proprietary, it’s also impossible to interrogate their decisions or hold them accountable.
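To make the quoted passage concrete, consider the following minimal sketch in Python, using numpy and scikit-learn. Everything in it is hypothetical: the group names, the detection rates, and the single “prior arrests” feature are invented for illustration, and this is emphatically not the actual COMPAS methodology, which remains undisclosed. The sketch simply shows the feedback loop the quotation describes: two groups with identical underlying behavior, one of which is policed more heavily so that its conduct is recorded more often. A model trained only on that recorded history, which never even sees group membership, still assigns the over-policed group higher risk scores.

```python
# Toy illustration (invented numbers, not any real tool's method) of how a
# model trained on biased historical records reproduces that bias through
# a proxy feature, even when group membership is never given to the model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 20_000

# Two groups, A and B, with the SAME underlying rate of reoffense behavior.
in_group_b = rng.random(n) < 0.5
true_reoffense = rng.random(n) < 0.30

# Over-policing assumption: group B's conduct is much more likely to be
# recorded, both in the "prior arrests" feature and in the rearrest label.
detect_rate = np.where(in_group_b, 0.9, 0.5)
prior_arrests = rng.binomial(3, 0.3 * detect_rate)           # proxy feature
rearrested = true_reoffense & (rng.random(n) < detect_rate)  # recorded label

# Train on the recorded history only; the model can learn nothing except
# the recording artifact, yet it presents its output as a "risk score."
X = prior_arrests.reshape(-1, 1).astype(float)
model = LogisticRegression().fit(X, rearrested)
risk = model.predict_proba(X)[:, 1]

print(f"true reoffense rate  A: {true_reoffense[~in_group_b].mean():.2f}  "
      f"B: {true_reoffense[in_group_b].mean():.2f}")  # ~equal by design
print(f"mean risk score      A: {risk[~in_group_b].mean():.2f}  "
      f"B: {risk[in_group_b].mean():.2f}")            # B scored higher
```

Run as written, the script reports roughly equal true reoffense rates for the two groups but a noticeably higher average risk score for the over-policed group: a correlation in the historical records, created by enforcement patterns rather than behavior, has been turned into a causal-sounding score, which is precisely the vicious cycle the quotation warns about.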