
Because justice is an abstract ideal, the legal system cannot be reduced to a set of secret algorithms.

Apr 29, 2022 | Firm News

Law is not a mathematical equation.  That’s because justice is an abstract ideal, much like infinity, that can be talked about but never fully understood.  As the great Bertrand Russell said in Chapter 7 of Unpopular Essays, “An Outline of Intellectual Rubbish” (1950), https://www.academia.edu/38060105/Bertrand_Russell_Unpopular_essays:

“The most savage controversies are those about matters as to which there is no good evidence either way.”

Russell uses this short piece to go on a rant about how stupid people can be, especially when they get together in groups and tempers run high.

Currently, a savage controversy is raging over whether mathematical equations can be used to attain justice.  From how much time someone should receive after a criminal conviction to whether a child should remain with the child’s parents, people are trying to make decisions based on secret mathematical equations, or algorithms.

To paraphrase Mark Twain, there are three types of lies: lies, damned lies, and algorithms.

COMPAS: sentencing by magic square

In State v. Loomis, 881 N.W.2d 749, 767 (2016), the Supreme Court of Wisconsin held that the use at sentencing of a proprietary risk assessment tool called COMPAS, an algorithm originally developed to help parole boards assess recidivism risk, did not violate the defendant’s due process rights to be sentenced (a) individually and (b) using accurate information.  COMPAS’s author, Northpointe, Inc., refused to disclose its methodology to the defendant or even to the court, indicating that the algorithm is proprietary and that disclosing it would leave the company vulnerable to competitors.

COMPAS’s output, a risk assessment score, was referenced by both the State and the trial court during sentencing in Loomis. Because the algorithm deemed the defendant to be at high risk of recidivism, the sentencing court denied him the possibility of probation and handed down a six-year sentence.

Despite upholding COMPAS’s constitutionality, the Court placed numerous restrictions on its use. COMPAS may not be used to determine whether an offender should be incarcerated or to calculate the length of his or her sentence.  Its use must be accompanied by an independent rationale for the sentence, and any presentence investigation report containing the score must include an elaborate, five-part warning about the algorithm’s limited utility.  The defendant petitioned the United States Supreme Court, which declined to hear the case.

COMPAS is biased against black defendants

An algorithm that accurately reflects our world also necessarily reflects our biases.  In other words, an algorithm is only as good as the data upon which it is based:

Simply put, decision-making algorithms work by taking the characteristics of an individual, like the age, income and zip code of a loan applicant, and reporting back a prediction of that person’s outcome—for example, the likelihood they will default on the loan—according to a set of rules. That prediction is then used to make a decision—in this case, to approve or deny the loan.

Algorithms often learn the rules for making predictions by first analyzing what’s known as “training data” to discover useful patterns and relationships between variables. The patterns or algorithmic insights gleaned from the training data become the basis for rules governing future decisions and predictions.

However, if the training data is biased then the algorithm can pick up on that pattern of discrimination and replicate it in future decisions. For example, a bank’s historical lending data may show that it routinely and unfairly gave higher interest rates to residents in a majority Black ZIP code. A banking algorithm trained on that biased data could pick up on that pattern of discrimination and learn to charge residents in that ZIP code more for their loans, even if they don’t know the race of the applicant.  Algorithmic Bias Explained: How Automated Decision-Making Becomes Automated Discrimination (The Greenlining Institute, February 18, 2021), https://greenlining.org/wp-content/uploads/2021/04/Greenlining-Institute-Algorithmic-Bias-Explained-Report-Feb-2021.pdf
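
To make the Greenlining Institute’s point concrete, here is a minimal, purely hypothetical sketch of how a decision rule “trained” on biased lending history reproduces that bias.  The ZIP codes, interest rates, and averaging rule below are invented for illustration only; they are not any real lender’s system, and race never appears in the data at all.

```python
# A toy illustration (not any real lender's or COMPAS's code) of how a rule
# "learned" from biased historical data reproduces that bias. Every ZIP code
# and interest rate below is hypothetical.
from statistics import mean

# Hypothetical lending history: (zip_code, interest_rate_charged).
# Assume ZIP 15219 is a majority-Black neighborhood that was historically
# and unfairly charged more; race itself never appears in the data.
training_data = [
    ("15219", 9.5), ("15219", 9.8), ("15219", 9.6),   # overcharged ZIP
    ("15090", 5.1), ("15090", 5.3), ("15090", 5.0),   # other ZIP
]

# "Training": the only rule learned is the average historical rate per ZIP.
rates_by_zip = {}
for zip_code, rate in training_data:
    rates_by_zip.setdefault(zip_code, []).append(rate)
learned_rule = {z: mean(rs) for z, rs in rates_by_zip.items()}

def predict_rate(zip_code: str) -> float:
    """Predict a new applicant's rate from ZIP code alone, using the learned rule."""
    return learned_rule[zip_code]

# A new applicant from the overcharged ZIP inherits the old discrimination,
# even though the "model" was never told anyone's race.
print(predict_rate("15219"))  # roughly 9.6 -- the historical overcharge persists
print(predict_rate("15090"))  # roughly 5.1
```

The point of the sketch is that the discrimination survives even though race is never an input: the ZIP code acts as a proxy, and the “learned” rule simply hands the past forward.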

COMPAS has been found to have an algorithmic bias.  Bias is defined as outcomes which are systematically less favorable to individuals within a particular group and where there is no relevant difference between groups that justifies such harms.  Nicol Turner Lee, Paul Resnick, and Genie Barton, Algorithmic bias detection and mitigation: Best practices and policies to reduce consumer harms (Brookings Institution May 22, 2019), https://www.brookings.edu/research/algorithmic-bias-detection-and-mitigation-best-practices-and-policies-to-reduce-consumer-harms/  COMPAS is skewed towards labeling black defendants as high risk and white defendants as low risk, in violation of the equal protection clause.  Julia Angwin et al., Machine Bias: There’s Software Used across the Country to Predict Future Criminals. And It’s Biased against Blacks, ProPublica (May 23, 2016), https://www.propublica.org/article/machine-biasrisk-assessments-in-criminal-sentencing  This is a flaw that cannot go unnoticed by the courts.  See Farhan Rahman, COMPAS Case Study: Fairness of a Machine Learning Model (Towards Data Science Sept. 7, 2020), https://towardsdatascience.com/compas-case-study-fairness-of-a-machine-learning-model-f0f804108751

Specifically, ProPublica found that the COMPAS recidivism algorithm was biased in the following ways (a brief sketch of how such error rates are compared follows the list):

  • Black defendants were often predicted to be at a higher risk of recidivism than they actually were. Our analysis found that black defendants who did not recidivate over a two-year period were nearly twice as likely to be misclassified as higher risk compared to their white counterparts (45 percent vs. 23 percent).
  • White defendants were often predicted to be less risky than they were. Our analysis found that white defendants who re-offended within the next two years were mistakenly labeled low risk almost twice as often as black re-offenders (48 percent vs. 28 percent).
  • The analysis also showed that even when controlling for prior crimes, future recidivism, age, and gender, black defendants were 45 percent more likely to be assigned higher risk scores than white defendants.
  • Black defendants were also twice as likely as white defendants to be misclassified as being a higher risk of violent recidivism. And white violent recidivists were 63 percent more likely to have been misclassified as a low risk of violent recidivism, compared with black violent recidivists.
  • The violent recidivism analysis also showed that even when controlling for prior crimes, future recidivism, age, and gender, black defendants were 77 percent more likely to be assigned higher risk scores than white defendants.  Jeff Larson, Surya Mattu, Lauren Kirchner and Julia Angwin, How We Analyzed the COMPAS Recidivism Algorithm (ProPublica May 23, 2016), https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm
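
The percentages above come from comparing error rates across groups: among people who did not reoffend, how often was each group labeled high risk, and among people who did reoffend, how often was each group labeled low risk?  Below is a minimal, purely hypothetical sketch of that comparison.  The handful of records is invented for illustration and is not the actual COMPAS data ProPublica analyzed.

```python
# Hypothetical sketch of the kind of group-wise error-rate comparison
# ProPublica performed. The records are invented; they are not COMPAS data.

# Each record: (group, labeled_high_risk, reoffended_within_two_years)
records = [
    ("black", True,  False), ("black", True,  True),  ("black", False, False),
    ("black", True,  False), ("black", False, True),  ("white", False, True),
    ("white", False, False), ("white", True,  True),  ("white", False, True),
    ("white", False, False),
]

def false_positive_rate(group: str) -> float:
    """Share of a group's non-reoffenders who were nonetheless labeled high risk."""
    did_not_reoffend = [r for r in records if r[0] == group and not r[2]]
    mislabeled_high = [r for r in did_not_reoffend if r[1]]
    return len(mislabeled_high) / len(did_not_reoffend)

def false_negative_rate(group: str) -> float:
    """Share of a group's reoffenders who were nonetheless labeled low risk."""
    reoffended = [r for r in records if r[0] == group and r[2]]
    mislabeled_low = [r for r in reoffended if not r[1]]
    return len(mislabeled_low) / len(reoffended)

for group in ("black", "white"):
    print(group, "labeled high risk but did not reoffend:",
          round(false_positive_rate(group), 2))
    print(group, "labeled low risk but did reoffend:",
          round(false_negative_rate(group), 2))
```

ProPublica’s published analysis computed these same two rates over thousands of real cases, which is where the percentages in the list above come from.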

Likewise, there is now a push to base child neglect investigations on algorithms.  One such algorithm, in its first years of operation, showed a pattern of flagging a disproportionate number of Black children for a “mandatory” neglect investigation, when compared with white children.  Sally Ho and Garance Burke, An algorithm that screens for child neglect raises concerns (AP News April 29, 2022), https://apnews.com/article/child-welfare-algorithm-investigation-9497ee937e0053ad4144a86c68241ef1. Independent researchers, who received data from the county that uses the tool, also found that social workers disagreed with the risk scores the algorithm produced about one-third of the time.  The algorithm is powered by data collected mostly about poor people, giving that data an outsized role in deciding families’ fates.  Like COMPAS, the child neglect algorithm reinforces existing racial disparities in the child welfare system.