According to experts in this field, the use of machine learning for risk assessment sentencing tools is still in the early stages of development and implementation. However, experts reported that machine learning offers many potential benefits for improving sentencing outcomes. For example, one policy simulation indicated that, when decisions were based on the tool’s outputs, jail populations could be reduced by 42 percent with no increase in crime rates, including violent crime. (75)
In another example, civil rights groups have raised concerns that predictive policing systems are not adequately audited and monitored on an ongoing basis to assess whether police are unjustifiably targeting specific neighborhoods. The groups argue that predictive policing algorithms may lead to biased criminalization of communities of color by further concentrating law enforcement activities in those communities. (76)
For example, many early-stage AI approaches to law enforcement are proprietary, and their algorithms are not available to the public. In addition, we and others have raised concerns about limited accuracy testing of these systems. In 2016, we found that the FBI had conducted only limited testing to ensure the accuracy of its face recognition capabilities. For example, the agency had not taken steps to determine whether partner law enforcement agencies’ FRT systems were sufficiently accurate and did not unnecessarily include photos of innocent people as investigative leads. We recommended that the FBI take steps to improve transparency, better ensure that face recognition capabilities are used in accordance with privacy protection laws and policy requirements, and ensure that FRT systems are sufficiently accurate. (77)
As prisons automate the collection of inmate data, risk assessment tools designed to predict inmates’ future behavior outside the prison setting should have more information to process and statistically assess, enhancing their predictive capacity. (78)
- But that is inherently biased, given that the majority of the prison population is Black.
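To make the mechanism concrete, a risk assessment tool of this kind can be thought of as a statistical model that maps recorded inmate features to a probability of future behavior; more automated data collection means more features to feed it. The sketch below is a toy logistic model; the feature names, weights, and structure are entirely hypothetical and do not describe any real sentencing tool:

```python
import math

# Hypothetical feature weights for a toy recidivism risk model.
# Real tools are proprietary; these numbers are invented for illustration.
WEIGHTS = {"prior_convictions": 0.4, "age_at_release": -0.05, "months_served": 0.01}
BIAS = -1.0

def risk_score(inmate):
    """Logistic risk score in [0, 1] computed from an inmate's recorded features."""
    z = BIAS + sum(WEIGHTS[k] * inmate.get(k, 0.0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

# More automated data collection means more of these fields are populated.
inmate = {"prior_convictions": 3, "age_at_release": 28, "months_served": 18}
print(round(risk_score(inmate), 3))
```

Note that a model like this simply reflects whatever patterns exist in its training data, which is exactly why the bias concern raised above matters: if recorded features correlate with race, the scores will too.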
From a policy perspective, some experts on predictive policing, face recognition, and risk assessments cited the need for federal oversight to regulate the use of AI in the criminal justice arena. (78)
- I agree with this
Experts agreed that a lack of transparency into the data used by proprietary algorithms can contribute to privacy, bias, and accuracy concerns. As a result, experts contended that enhancing existing federal regulations and policies, or establishing a federal regulatory body to assess the use of these AI applications, could have benefits. (78)
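One concrete form such an independent assessment could take is a disparity audit: comparing the tool's error rates across demographic groups, for example the rate at which people who did not reoffend were nonetheless flagged as high risk. The sketch below is a minimal version of that check; the sample records and group labels are invented for illustration:

```python
from collections import defaultdict

def false_positive_rate_by_group(records):
    """For each group, the share of people who did NOT reoffend (actual=0)
    but whom the tool flagged as high risk (predicted=1)."""
    flagged = defaultdict(int)    # non-reoffenders flagged high-risk
    negatives = defaultdict(int)  # all non-reoffenders
    for group, predicted, actual in records:
        if actual == 0:
            negatives[group] += 1
            if predicted == 1:
                flagged[group] += 1
    return {g: flagged[g] / negatives[g] for g in negatives}

# Hypothetical audit sample: (group, tool's prediction, actual outcome).
sample = [
    ("A", 1, 0), ("A", 0, 0), ("A", 0, 0), ("A", 1, 1),
    ("B", 1, 0), ("B", 1, 0), ("B", 0, 0), ("B", 1, 1),
]
print(false_positive_rate_by_group(sample))
```

An audit body with access to predictions and outcomes could run exactly this kind of comparison without ever seeing the proprietary algorithm itself, which is one reason experts frame it as a transparency remedy.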
Some experts recommended that the federal government establish a federal regulatory agency to perform independent assessments of the accuracy and potential bias of AI systems. (79)
U.S. Government Accountability Office. “Artificial Intelligence: Emerging Opportunities, Challenges, and Implications.” GAO Reports, March 2018, pp. 1–94.