Esprii Chapman – Notes on Fairness, Accountability and Transparency: Notes on Algorithmic Decision-Making in Criminal Justice (Article 1 of 10)

General Overview:

“First, can we trust technology to be fair, especially given that the data on which the technology is based are biased in various ways? Second, whom can we blame if the technology goes wrong, as it inevitably will on occasion? Finally, does it matter if we do not know how an algorithm works or, relatedly, cannot understand how it reached its decision? I argue that, while these are serious concerns, they are not irresolvable. More importantly, the very same concerns of fairness, accountability and transparency apply, with even greater urgency, to existing modes of decision-making in criminal justice. The question, hence, is comparative: can algorithmic modes of decision-making improve upon the status quo in criminal justice? There is unlikely to be a categorical answer to this question, although there are some reasons for cautious optimism.” (126)

  • Main argument of the paper written clearly in the abstract.

“The objective of this paper is to respond to a variety of ethical concerns that have been raised in the legal literature surrounding these emerging technologies, specifically in the context of criminal justice. These concerns can be clustered under the headings of fairness, accountability and transparency.” (126)

  • Listed plainly, this is what they are covering.

“For the most part, I conclude that these are serious, but resolvable, concerns. More importantly, the very same concerns arise with regard to existing modes of decision-making in criminal justice. Existing systems of criminal justice already have serious problems with fairness, accountability and transparency. The question, hence, is comparative: can algorithmic modes of decision-making improve the status quo ante in criminal justice?” (127)

  • Conclusions at the beginning.

Within the text:

“Although there are significant differences between them [the many different bits of machine learning in the justice system], in general, what is common to those technologies is that they recommend treatments – police interventions, bail determinations, sentences – on the basis of mathematically structured processing, typically over extensive datasets.” (126)

  • A general explanation of what they mean by machine learning in the criminal justice system: recommended treatments produced by mathematically structured processing over extensive datasets.
  • They are not using this against these technologies in this quote; they set the point aside later in the article.

“Survey data suggest similar rates of use of illicit drugs among Blacks and Whites, although Black drug users are more likely to have criminal justice contact than White drug users. One explanation for this phenomenon is that police have prioritised enforcement actions on open-air drug markets, primarily used by African-Americans, rather than residential transactions favoured by Whites; another concerns racial bias in policing. In cases like this, predictive variables that are neutral on their face turn out to exhibit racially skewed correlations that, on account of their aetiology, it would be unfair to entrench.” (127)

  • This is what I have had in mind as a focus. 
  • Racial bias against Black individuals, whose open-air drug trades are easier to expose, combined with clear systemic racism in American legal systems and law enforcement, has led to this.
  • When you’re already being targeted more because of racism, whether casual or blatant, it’s going to be harder to hide a drug trade than it is for people who aren’t being targeted. I can predict that the algorithms that are “neutral on their face” predict crime from historical crime records that carry racial bias.

“Even if an algorithm does not expressly rely on race, it is likely to positively weight factors such as prior arrests and criminal record that are strongly correlated with race.” (127)

  • My point from the last note proven here: the “neutral on their face” algorithms predict crime from historical crime data that carries racial bias. A small sketch of how that happens follows below.
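To make this concrete, here is a minimal sketch (entirely synthetic data and made-up numbers, nothing from the paper) of how a model that never sees race can still produce racially skewed scores, simply by positively weighting prior arrests that are themselves skewed by differential enforcement:

```python
# Hypothetical simulation: a model trained without race still yields skewed
# scores because its "prior arrests" feature is itself racially skewed.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group_a = rng.random(n) < 0.5                      # race proxy, never shown to the model

# Both groups offend at the same rate, but group A's offences are more likely
# to end in an arrest record (differential enforcement).
past_offending = rng.random(n) < 0.20
arrest_prob = np.where(group_a, 0.60, 0.30)
prior_arrest = past_offending & (rng.random(n) < arrest_prob)

# Future offending depends only on past behaviour, identically for both groups.
future_offending = rng.random(n) < np.where(past_offending, 0.40, 0.10)

# Train on the facially neutral feature only.
X = prior_arrest.reshape(-1, 1).astype(float)
model = LogisticRegression().fit(X, future_offending.astype(int))
scores = model.predict_proba(X)[:, 1]

print("mean risk score, group A:", round(scores[group_a].mean(), 3))
print("mean risk score, group B:", round(scores[~group_a].mean(), 3))
print("true future offending, A:", round(future_offending[group_a].mean(), 3),
      " B:", round(future_offending[~group_a].mean(), 3))
# Group A gets higher average scores even though true offending rates match.
```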

“Otherwise put, basing policy on flawed statistical inputs can generate a self-fulfilling prophecy. If policy-makers expect ex ante to find more crime among group A than group B, then it is possible that they will find this expectation validated ex post, but only because they have spent more time looking for crime among members of group A than among members of group B.” (127-128)

  • A feedback loop is being created here (sketched below).
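A minimal sketch of that feedback loop (hypothetical rates, not data from the paper): the true crime rate is identical in groups A and B, but patrols follow last round’s recorded crime, so the records keep “confirming” the initial expectation that group A is the higher-crime group:

```python
# Hypothetical feedback loop: equal true crime rates, but recorded crime
# mirrors where the police spend their time looking.
import numpy as np

rng = np.random.default_rng(1)
population = 10_000
true_rate = 0.10                         # same underlying crime rate for A and B
patrol_share = {"A": 0.6, "B": 0.4}      # initial expectation: "more crime in A"

for rnd in range(5):
    recorded = {}
    for g in ("A", "B"):
        crimes = rng.binomial(population, true_rate)
        # Chance of a crime being recorded scales with patrol effort in that group.
        recorded[g] = rng.binomial(crimes, patrol_share[g])
    total = recorded["A"] + recorded["B"]
    # Next round's patrols are allocated by this round's recorded crime.
    patrol_share = {g: recorded[g] / total for g in ("A", "B")}
    print(f"round {rnd}: recorded={recorded}, "
          f"next patrol share={ {g: round(s, 2) for g, s in patrol_share.items()} }")
# Recorded crime stays roughly 60/40 against group A, validating the prior
# ex post even though the two groups behave identically.
```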

“Since it is rather plausible that human decision-makers also consider many of the same factors, such as a history of arrests and convictions, that correlate with racial identity, self-fulfilling prophecies are likely to be an inevitable problem in criminal justice.” (128)

Note from Esprii: Racial skews in the justice system already exist, and most judges will use them, whether intentionally or not, predicting and sentencing against historical evidence that lends itself to racial bias. To fully eliminate this, life in America would need to be restarted in its entirety so that racism never took hold.

“Kleinberg and his co-authors trained a machine-learning algorithm on a dataset comprising bail decisions from New York City between 2008 and 2013. They found that the algorithm was able to more accurately predict crime than human judges, even when it was constrained to ensure that the racial composition of detainees tracked that of arrestees. ‘[I]t is possible,’ they write, ‘to reduce the share of the jail population that is minority – that is, reduce racial disparities within the current criminal justice system – while simultaneously reducing crime rates relative to the judge.’ If this result turns out to be robust, that would count in favour of preferring computerised risk assessment over clinical human judgment, for the former would strictly dominate the latter: it would show that algorithmic assessment outperforms human judgment at both minimising introduced racial skew and improving substantive accuracy.” (128)
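Here is a toy version of the kind of comparison Kleinberg et al. describe (invented numbers and a deliberately simplified procedure, not their actual method or data): detain the same share of people as the judges, constrain the detained group’s racial composition to match the arrestee pool, pick the highest-scoring people within each group, and compare the expected crime rate among those released with that of a noisier “judge”:

```python
# Toy sketch: composition-constrained algorithmic detention vs. a noisier baseline.
import numpy as np

rng = np.random.default_rng(2)
n = 5_000
minority = rng.random(n) < 0.5                      # arrestee pool composition (hypothetical)
true_risk = rng.beta(2, 5, n)                       # chance of pretrial crime if released
algo_score = np.clip(true_risk + rng.normal(0, 0.05, n), 0, 1)   # accurate but imperfect
judge_score = np.clip(true_risk + rng.normal(0, 0.20, n), 0, 1)  # much noisier signal

n_detain = int(0.30 * n)                            # match the judges' detention rate

# Algorithm: detain the highest-scoring people within each group, with the
# number per group fixed so detainee composition matches the arrestee pool.
quota_min = int(n_detain * minority.mean())
detained = np.zeros(n, dtype=bool)
for is_min, k in ((True, quota_min), (False, n_detain - quota_min)):
    idx = np.where(minority == is_min)[0]
    detained[idx[np.argsort(algo_score[idx])[::-1][:k]]] = True

# "Judge": detain the same number of people using the noisier signal, unconstrained.
judge_detained = np.zeros(n, dtype=bool)
judge_detained[np.argsort(judge_score)[::-1][:n_detain]] = True

print("minority share of detainees  algo:", round(minority[detained].mean(), 3),
      " judge:", round(minority[judge_detained].mean(), 3))
print("expected crime among released  algo:", round(true_risk[~detained].mean(), 3),
      " judge:", round(true_risk[~judge_detained].mean(), 3))
```

In this toy setup the algorithm matches the arrestee composition by construction and still leaves a lower-risk released pool than the noisier baseline, which is the “strict dominance” idea in the quote.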

“My proposal [to determine if there is racial bias] is to compare the racial composition of the outputs of decisions made at that stage to the racial composition of its inputs, in order to clearly compare the degree to which that mode of decision introduces racial skew to the degree to which it improves accuracy against other modes of decision, including random selection.” (129)

  • Compare one mode of prediction against another and, as a control, against random selection (a minimal sketch of the comparison follows below).
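A minimal sketch of that comparison (synthetic data and invented selection rules, used only to show the bookkeeping): measure how much racial skew each decision mode introduces, output composition minus input composition, alongside its accuracy, with random selection as the control:

```python
# Sketch of the proposal: skew introduced by a decision stage vs. its accuracy.
import numpy as np

rng = np.random.default_rng(3)
n = 2_000
is_minority = rng.random(n) < 0.40       # composition of the stage's inputs
actually_risky = rng.random(n) < 0.25    # ground truth (hypothetical)
n_select = 500                           # e.g. number detained at this stage

def introduced_skew(selected):
    # Minority share among outputs minus minority share among inputs.
    return is_minority[selected].mean() - is_minority.mean()

def accuracy(selected):
    # Share of those selected who really were high-risk.
    return actually_risky[selected].mean()

# Mode 1: some decision procedure (placeholder: a noisy cut on true risk).
noisy = actually_risky + rng.normal(0, 0.5, n)
mode1 = np.zeros(n, dtype=bool)
mode1[np.argsort(noisy)[::-1][:n_select]] = True

# Mode 2: the random-selection control.
mode2 = np.zeros(n, dtype=bool)
mode2[rng.choice(n, n_select, replace=False)] = True

for name, sel in (("decision mode", mode1), ("random control", mode2)):
    print(f"{name}: skew={introduced_skew(sel):+.3f}, accuracy={accuracy(sel):.3f}")
```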

“Compounded racial bias is a structural problem in criminal justice, regardless of whether we utilise structured algorithms or human judgment at a given stage of the process.” (129)

  • This isn’t a recent issue; it’s just more exposed now that harder math and evidence are playing a larger part in our criminal justice system.

“any computerised algorithm, no matter how sophisticated, and no matter the size and composition of the dataset on which it was trained, will inevitably yield recommendations that are too crude, and, in particular, cruder than what could be achievable by human judgment.” (130)

  • This raises another question: will we ever have computers that can replicate a human brain in all of its major and minor details?

“There is certainly a sense in which, trivially, any two cases are distinct from each other. For individualised tailoring to be a substantive objection, however, it must amount to the claim that either: (1) it is not the case that like cases should be treated alike; or (2) although like cases should be treated alike, no two cases are similar along all relevant dimensions; or (3) although like cases should be treated alike, and although cases may be similar along all relevant dimensions, computerised algorithms are not sensitive to all relevant dimensions.” (130)

  • A hard decision
  • “treat people equivalently on the basis of a discrete range of considerations as indicated by law.” (130)
  • “More plausibly, and more sensibly, one might interpret (1) by observing that we can often identify some range of equally justifiable outcomes for a given case. Within that range, no one outcome is demonstrably superior to others. Hence, while like cases should be treated roughly alike, it is not required that they be treated precisely alike.” (130)

“Part of the reason line officials in criminal justice tend to have as much discretion as they do is no doubt because the kinds of decisions they are called upon to make – whether to stop someone for questioning, where to patrol, whether to order someone detained pending trial, how to sentence and so forth – do not always have unique correct answers, at least relative to the evidence available at the time of decision.” (131)

  • Once again, can a computer be made to make these decisions reliably?
  • Can there be true artificial intelligence?
  • This brings it back to what I wanted to argue in my ST this year: that using AI in the criminal justice system is a fair use, but we’re not there YET.

“A substantial part of the appeal of an empirically validated risk assessment is its ability to screen out incorrect outcomes more reliably than clinical human judgment. It may be, for instance, that popular views among judges and prosecutors about risk factors for further offending – such as employment status or drug use – turn out, upon analysis of the evidence, to be simply incorrect or of trifling significance. Furthermore, while these types of tools are not immune to other forms of racial bias, they are not prone to the kinds of unconscious or implicit biases that might be at work in lawyers and judges making bail or sentencing determinations.” (131)

  • The article states the appeal of using these algorithms to influence or outright determine someone’s sentence, but once again, they are simply mimicking, at a much more binary level, the past outcomes of somewhat similar trials. They are not yet equipped to make these sorts of calls.
    • I will claim that we can use them in conjunction with our currently flawed legal system, but at their current level they can do equal if not greater damage if given a larger part in sentencing and justice.

“even if no two individuals are exactly alike in terms of whether their release pending trial would traumatise a community, what the law deems relevant is only whether the release of any of them, considered on its own, would do so.” (132)

  • Talking about a law office run by humans and if someone is considered a threat based on the people at the law office’s morals vs. if they’re considered a threat against a historical dataset of similar cases, like machine learning does.
    • Then again, our morals are derived from basically a large historical dataset as well, so they’re somewhat comparable.

Note from Esprii: Comparatively, the two methods, completely human and completely algorithmic, are much the same in the sense that both treat similar cases as the same; however, the judge can still pinpoint fine detail in their decisions.

  • “To be sure, there are contexts in which the law is more sensitive to comparative concerns, such as fairness in sentencing. But, just as treating cases ‘alike’ need not mean alike in every possible respect (since there can be a margin for permissible variation), regarding cases as similar does not entail regarding them as ultimately similar – that is, similar in every least respect. There is a limit to the amount of detail and nuance to which the law is, and perhaps should be, sensitive. That leaves interpretation (3), the weakest interpretation of the individualised tailoring objection. What is clear about this interpretation is that it is an empirical claim about the capabilities of algorithmic decision-making.” (132)

“Potentially life-changing decisions about arrest, bail, plea and sentence are often made rapidly, on a limited informational basis, by people who suffer from all the usual cognitive biases, imperfect heuristics and unconscious influences with which we are familiar.” (132)

  • Again, backing up my claim.

“There is, I think, a sensible interpretation of individualised tailoring that survives these objections, but it is both limited and contingent. This is the concern that existing statistical and predictive models might be too crude to reliably capture those features of legal cases that we would expect of them. For instance, one might argue that, although A and B might share a similar background, causing a predictive instrument to assign similar recidivism scores, they might nevertheless differ in a way that affects their future riskiness; and that a human judge might be alert to these differences in a way that a purely statistical instrument is not.” (133)

  • They are not reliable enough YET to see these minor differences. They can predict based on broad statistical analysis; however, they are “too crude” (133) to predict based on fine detail.
  • “However, it is a concern that should become less pressing over time, as those technologies mature.” (133)

“Even if human judgment at its best is capable of very finely nuanced and calibrated judgment, it is far from clear that this is generally true of our systems of ‘mass justice’ – particularly in the context of relatively low-level offences that do not garner significant investment of time and attention. In my view, the greatest promise of algorithmic decision-making in criminal justice lies in improving the accuracy of decision-making in the context of everyday, routine cases, rather than in the rarer instances of high-stakes, and intensely litigated, cases.” (133)

  • Starting small and not going big immediately would help train it up.

“A traditional way of expressing this concern is in the language of due process. Insofar as how an algorithm classifies you is impactful, particularly on traditional liberty interests, then you are entitled to a range of procedural rights – an oral hearing, to call witnesses, challenge the evidence against you, cross-examine and so forth. But algorithms deny you that process. They classify you based on information fed into it by a database – information that may potentially be erroneous or explained away.” (134)

  • Algorithms are currently set up as effectively impassable firewalls that you can’t argue against, even when they are making decisions they are not suited to make.

“a person who is classified as high-risk by a risk-assessment device would, presumably, continue to have the right to challenge the use that the jurist made of it, traditional notions of due process do not support the view that he has the right to contest the design parameters of the algorithm itself. This is because, within broad limits (having to do, for instance, with racial discrimination), neither statutory nor constitutional law mandates any particular means of assessing an accused person’s risk.” (134)

  • A person keeps the right to challenge the use a judge makes of an algorithm’s output, but traditional due process does not give them the right to contest the design parameters of the algorithm itself, because no law mandates any particular means of assessing risk.

“The first [concern] is an unease with adopting algorithmic decision-making in high-stakes areas like criminal justice because of how poorly understood the technologies in question are.” (135)

  • Like I’ve been saying, they are poorly understood, and as such, it can be said that they are poorly/inaccurately built.

“some algorithmic devices in use today in criminal justice contexts do not rely on machine-learning techniques. For instance, the risk-assessment tool developed by the Arnold Foundation is based on relatively straightforward regression models. Hence, even if most people, including most judges, lawyers and accused persons, do not understand how regressions work, that is no objection to relying on a risk-assessment device of this kind.” (135)

  • These tools aren’t transparent unless the people using them understand how they work (a minimal sketch of a regression-based score follows below).
    • This brings up another question: if the judges do not understand the algorithm, but an advisor is hired who does understand it and can offer advice to the judge based on the raw output, is that a better plan than making every judge who uses a predictive algorithm understand said algorithm?
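For contrast with the machine-learning case discussed next, here is a minimal sketch of what a “relatively straightforward regression model” risk tool looks like (hypothetical features, synthetic data and made-up weights; not the Arnold Foundation’s actual instrument): every input gets a single coefficient that a lawyer or judge could, in principle, have explained to them.

```python
# Minimal regression-style risk score: one inspectable coefficient per feature.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 5_000
features = {
    "prior_convictions": rng.poisson(1.0, n).astype(float),
    "prior_fta":         rng.poisson(0.5, n).astype(float),   # prior failures to appear
    "age_at_arrest":     rng.integers(18, 70, n).astype(float),
}
X = np.column_stack(list(features.values()))

# Synthetic outcome, generated for illustration only.
logit = (0.5 * features["prior_convictions"] + 0.8 * features["prior_fta"]
         - 0.03 * features["age_at_arrest"])
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

model = LogisticRegression().fit(X, y)

# The intelligibility point: the whole model is just these few numbers.
for name, coef in zip(features, model.coef_[0]):
    print(f"{name:>18}: {coef:+.2f}")
print("risk score for a new case (3 convictions, 1 FTA, age 22):",
      round(model.predict_proba([[3.0, 1.0, 22.0]])[0, 1], 3))
```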

“Machine-learning techniques, neural networks in particular, raise a distinct set of concerns. Machine learning is ‘atheoretical’, in that a machine-learning algorithm ‘learns’ on its own to draw correlations between outcomes and inputs, including inputs that would not make much sense to a human.22 In the case of aeroplanes, bridges and pharmaceuticals, even if lay persons do not understand how they work, still experts do. There are technicians, engineers and chemists who understand the mechanism. In contrast, in the case of a machine-learning algorithm, it may be the case that no one really understands the basis upon which it is drawing its correlations. Those correlations might be quite reliable, but it might be that no one is in a position to articulate quite why they are reliable and this surely does raise distinctive concerns about intelligibility.” (136)

  • Transparency is just as important in machine-learning algorithms, especially where hidden layers mean that no one may be able to articulate why the learned correlations work.

“private actors who are driving the development of most of these devices, from facial recognition to risk assessment to predictive policing software, have incentives that can be trusted to ensure that they reliably operate in the public interest.” (137)

  • We would need background checks and research/evidence showing that these tools come from unbiased groups working with unbiased information, which, in this day and age, is nearly impossible.

“improving upon the status quo in criminal justice – particularly in the US – is a low bar. This is because the status quo in criminal justice is deplorable. Criminal justice is perennially underfunded, its institutions are often reflexively resistant to change, subject to emotionally charged populist campaigns, and are shot through with unfair bias of all kinds.” (138)

  • Before we can have half-decent algorithms, we need to eliminate bias, and eliminating bias is next to impossible under the current status quo.

“One might worry that the increased use of predictive algorithms, no matter how accurate, reliable and fair they become, would amount to turning criminal law and criminal justice over to technocrats and experts. One might regard this as a loss, for it would transform criminal law from the public re-enactment of a society’s moral habitus into the coldly calculating work of minimising net social harm.” (138)

  • A valid concern; working to ensure that we have both technology and humans in an office of legal power might be a better idea. One would serve society’s moral habitus; the other would work to minimise net social harm.

Chiao, Vincent. “Fairness, accountability and transparency: notes on algorithmic decision-making in criminal justice.” International Journal of Law in Context, vol. 15, 2019, pp. 126-139.
