Would You Trust Artificial Intelligence to Determine Your Criminal Sentence?

New advances in artificial intelligence have recently spurred talk of adopting such systems more widely in criminal justice software. Currently, many courts use “black box” software to aid in criminal sentencing. Critics of this practice have voiced concerns about transparency, arguing that software whose inner workings cannot be inspected or audited creates the potential for mis-sentencing, even though such tools may be statistically more accurate than “white box” alternatives. Recently, machine learning researchers have introduced more transparent, easier-to-interpret statistical models into this debate. The question that remains is whether these algorithms, after ingesting swathes of data, can predict criminal offenders’ recidivism rates more accurately than humans can.

COMPAS is one such program that several teams of scientists recently tested, only to find that it performed no better or worse than human volunteers. COMPAS takes in each defendant’s sex, age, criminal record, and the criminal charge faced, along with many other factors such as marital and family relationships, living situation, school and work performance, and substance abuse. The program then analyzes this profile to calculate a score that classifies the defendant as a low, medium, or high risk of recidivism.
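COMPAS itself is proprietary, so its actual model, features, and weights are not public. As a rough illustration of the general approach the article describes, the following minimal Python sketch turns a defendant profile into a numeric score and bins it into the three risk categories; every field, weight, and threshold here is invented purely for illustration.

```python
# Hypothetical sketch of a profile-based risk scorer. COMPAS is proprietary;
# the features, weights, and thresholds below are invented for illustration
# of the general idea only.

def risk_score(profile: dict) -> float:
    """Combine profile fields into a single numeric risk score."""
    score = 0.0
    score += 2.0 * profile["prior_arrests"]            # criminal record
    score += 1.5 if profile["age"] < 25 else 0.0       # younger defendants score higher
    score += 1.0 if profile["unemployed"] else 0.0     # work situation
    score += 1.0 if profile["substance_abuse"] else 0.0
    return score

def risk_category(score: float) -> str:
    """Bin a numeric score into the low/medium/high labels the article describes."""
    if score < 3.0:
        return "low"
    elif score < 6.0:
        return "medium"
    return "high"

example = {"prior_arrests": 2, "age": 22, "unemployed": True, "substance_abuse": False}
print(risk_category(risk_score(example)))  # -> "high"
```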

One team of researchers ran 1,000 profiles of defendants, drawn from a random sample of people arrested in Broward County, Florida, through the COMPAS system. COMPAS correctly predicted whether someone had been rearrested 65.2% of the time. Separately, 400 human volunteers, presented with the same 1,000 profiles, predicted whether each defendant would be arrested for another crime within two years of sentencing. The volunteers made successful predictions 62.1% of the time.
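Accuracy figures like 65.2% are simply the fraction of defendants whose predicted outcome (rearrested or not) matches what actually happened within the two-year window. The short sketch below computes that fraction; the predictions and outcomes are invented sample values, not data from the study.

```python
# Minimal sketch of how an accuracy figure is computed: the share of
# predictions that match the observed outcome. All values below are
# invented for illustration, not taken from the Broward County data.

def accuracy(predictions: list[bool], outcomes: list[bool]) -> float:
    """Fraction of predictions that match the observed outcome."""
    correct = sum(p == o for p, o in zip(predictions, outcomes))
    return correct / len(outcomes)

algorithm_preds = [True, False, True, True, False]   # True = predicted rearrest
human_preds     = [True, True,  True, False, False]
actual          = [True, False, False, True, False]  # True = actually rearrested

print(f"algorithm: {accuracy(algorithm_preds, actual):.1%}")  # 80.0%
print(f"humans:    {accuracy(human_preds, actual):.1%}")      # 40.0%
```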

In an additional test administered only to the human participants, the scientists included defendants’ race in the profiles to measure whether racial bias influenced volunteers’ predictions. Volunteers successfully predicted recidivism 66.5% of the time, suggesting that the racial information did not significantly change their accuracy. Interestingly, a separate analysis by different researchers found just the opposite for COMPAS itself. African Americans who did not commit further crimes were nearly twice as likely as Caucasians to be wrongly flagged as “high risk” by COMPAS, while Caucasians who became repeat offenders were disproportionately likely to be wrongly flagged as “low risk.”
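The disparity described above is usually expressed as a false positive rate computed separately for each group: among people who did not reoffend, what share were nonetheless flagged “high risk”? The sketch below shows that calculation on a handful of invented records; it is an illustration of the metric, not a reproduction of the researchers’ analysis.

```python
# Sketch of the fairness metric discussed above: the false positive rate
# (non-reoffenders wrongly flagged "high risk"), computed per group.
# The records below are invented for illustration only.

from collections import defaultdict

records = [
    # (group, flagged_high_risk, actually_reoffended)
    ("African American", True,  False),
    ("African American", True,  True),
    ("African American", False, False),
    ("Caucasian",        False, False),
    ("Caucasian",        True,  True),
    ("Caucasian",        False, True),
]

flagged = defaultdict(int)   # non-reoffenders flagged high risk, per group
total = defaultdict(int)     # all non-reoffenders, per group
for group, high_risk, reoffended in records:
    if not reoffended:
        total[group] += 1
        if high_risk:
            flagged[group] += 1

for group in total:
    print(f"{group}: false positive rate {flagged[group] / total[group]:.0%}")
```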

Prediction errors likely stem from inaccurate or missing profile data, as well as from models adhering too closely to patterns in the data they were trained on, even when a new case rightfully falls outside those patterns. The potential for error remains significant given the magnitude of the decisions such an algorithm is asked to inform: whether to convict someone and how long to keep them behind bars. Relying solely on any computing software for criminal sentencing seems unethical. Yet as more candidate algorithms undergo testing, researchers are making their code and profile data publicly available and building them with open-source software. As these algorithms are refined, their transparency and, hopefully, improved accuracy will likely spark widespread adoption nationwide in the coming years. With this and other significant recent advances in technology, the mutual reliance between humans and technology continues to deepen and is undoubtedly here to stay.

Mallory Matheson
