The new study confirms that humans predict repeat offenders about as well as algorithms do when given immediate feedback on the accuracy of their predictions and shown only limited information about each criminal. But people fare worse than computers when they receive no feedback or when the criminal profiles grow more detailed.
People rivaled COMPAS' performance, making accurate predictions about 65 percent of the time. Even without feedback, people achieved 62 percent accuracy. But in a slightly different version of the experiment, in which participants had to predict which of 50 criminals would be arrested for violent crimes rather than just any crime, COMPAS had an edge over people who did not receive feedback. With feedback, humans performed the task with 83 percent accuracy, close to COMPAS' 89 percent. Without feedback, human accuracy fell to about 60 percent, as people tended to overestimate the risk that criminals would commit violent crimes. The study didn't investigate whether racial or economic biases contributed to that trend. In a third variation of the experiment, which involved reviewing more detailed criminal profiles, software called LSI-R outperformed people. LSI-R could consider 10 more risk factors than COMPAS, including substance abuse, level of education and employment status. The criminals that people ranked as most likely to be arrested and reincarcerated again included 58 percent of actual repeat offenders; LSI-R's list captured about 74 percent.
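The two kinds of figures in the study can be read as two different metrics: the 65 and 89 percent numbers are simple prediction accuracy, while the 58 versus 74 percent comparison measures how many actual reoffenders land on a rater's highest-risk list. A minimal sketch of both calculations, using invented data rather than anything from the study:

```python
# Hypothetical illustration of the two metrics reported in the study.
# All data below is invented; it is not the study's data.

def accuracy(predictions, outcomes):
    """Fraction of binary yes/no predictions that match actual outcomes."""
    correct = sum(p == o for p, o in zip(predictions, outcomes))
    return correct / len(outcomes)

def top_k_capture(risk_scores, outcomes, k):
    """Share of actual reoffenders appearing among the k highest-risk cases."""
    ranked = sorted(range(len(risk_scores)),
                    key=lambda i: risk_scores[i], reverse=True)
    top_k = set(ranked[:k])
    reoffenders = [i for i, o in enumerate(outcomes) if o == 1]
    captured = sum(i in top_k for i in reoffenders)
    return captured / len(reoffenders)

# Invented example: 10 cases, outcome 1 = rearrested, 0 = not.
outcomes    = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
predictions = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]   # a rater's yes/no guesses
scores      = [0.9, 0.2, 0.8, 0.4, 0.3,
               0.7, 0.6, 0.1, 0.85, 0.5]        # a tool's risk scores

print(accuracy(predictions, outcomes))   # → 0.8 (8 of 10 guesses correct)
print(top_k_capture(scores, outcomes, 5))  # → 0.8 (4 of 5 reoffenders in top 5)
```

The distinction matters when comparing the human and LSI-R results: a list can capture most reoffenders while still misclassifying many low-risk individuals, so the two metrics can diverge.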
Computer scientist Hany Farid of the University of California, Berkeley, who worked on the 2018 study, says that just because algorithms outmatch untrained volunteers doesn't mean they should be trusted.
Temming, Maria. "AI bests humans in crime predictions." Science News, Vol. 197, No. 5, March 2020, p. 10.