Esprii Chapman – Notes on Algorithms, Agency, and Respect for Persons (Article 5 of 10)

Note from Esprii:

This article mainly covers sources that I’ve already read and taken notes on; however, it was useful for learning more in-depth information about certain things, such as Loomis’s case in Wisconsin, which has become an integral part of my research.

“The capacity to self-govern—to develop and endorse one’s sense of value and act according to those values as one sees fit—grounds a number of moral claims.” (550)

  • We have not developed AI far enough for it to have its own consistent, reliable morals.

“To understand our first argument, let’s return to the IMPACT case. There is a plausible argument for DC schools using IMPACT. The algorithm uses complex, data-driven methods to find and eliminate inefficiencies, and it purports to do this in an objective manner. Its inputs are measurements of performance and its outputs are a strict function of those measurements. Whether a teacher has, say, ingratiated herself to administrators would carry little weight in the decision as to whether to fire her. Rather, it is (ostensibly) her effectiveness as a teacher that grounds that decision. Nonetheless, DC schools’ use of IMPACT was problematic. This is in part because IMPACT’s conclusions were epistemically flawed. A large portion of a teacher’s score is based on an Individual Value-Added measurement (IVA) of student achievement (Walsh and Dotter 2014), which seeks to isolate and quantify a teacher’s individual contribution to student achievement on the basis of annual standardized tests (Isenberg and Hock 2012). IVAs are poorly suited for this task. DC teachers work in schools with a high proportion of low-income students. At the time IMPACT was implemented, even in the wealthiest of the city’s 8 wards (Ward 3) nearly a quarter of students were low-income, and in the poorest ward (Ward 8) 88 percent were low income (Quick 2015).” (551)
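
To see why an estimate like this is hard to get right, here is a minimal, hypothetical sketch of a naive value-added calculation in Python. It is not the actual IMPACT/IVA methodology (Isenberg and Hock 2012 describe a far more elaborate statistical model); every name and number below is made up for illustration.

```python
# Illustrative only -- NOT the IMPACT/IVA model. A bare-bones "value-added"
# estimate: predict each student's score from last year's score, then credit
# or blame the teacher with the average residual of their students.
import numpy as np

def naive_value_added(prior_scores, current_scores, teacher_ids):
    """Toy value-added score: each teacher's mean residual versus a simple prediction."""
    prior = np.asarray(prior_scores, dtype=float)
    current = np.asarray(current_scores, dtype=float)

    # Predict this year's score from last year's with a straight line.
    slope, intercept = np.polyfit(prior, current, 1)
    residuals = current - (slope * prior + intercept)

    # Average the residuals by teacher: positive means the teacher's students
    # "beat" the prediction, negative means they fell short of it.
    return {
        tid: residuals[np.array([t == tid for t in teacher_ids])].mean()
        for tid in set(teacher_ids)
    }

# Entirely made-up data: two teachers, three students each.
print(naive_value_added(
    prior_scores=[40, 55, 70, 45, 60, 75],
    current_scores=[48, 60, 72, 44, 58, 70],
    teacher_ids=["A", "A", "A", "B", "B", "B"],
))
```

A model like this credits or blames the teacher for whatever the prior-score prediction misses, including poverty-related factors entirely outside her control, which is exactly the epistemic flaw the authors point to in DC’s high-poverty wards.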

Loomis v. WI: “Our argument that algorithmic systems conflict with persons’ autonomy extends to the Loomis case. To begin, COMPAS is only moderately reliable. Researchers associated with Northpointe assessed COMPAS as being accurate in about 68 percent of cases.” (556)

  • It shouldn’t be used as the sole basis for a sentencing decision, which is arguably how it functioned in this case.

“More important is that COMPAS incorporates numerous factors for which defendants are not responsible. COMPAS takes a number of data points about a defendant’s criminal behavior, history, beliefs, and job skills, and generates a series of risk scales.” (556-57)

  • More detail on what COMPAS measures: the data points it takes in and the risk scales it produces (a generic sketch of such a risk scale follows below).
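
Since COMPAS itself is proprietary, the following is only a generic, hypothetical sketch of how a feature-weighted risk scale of this sort might be built; none of the feature names or weights come from Northpointe.

```python
# Generic illustration of a "risk scale" -- not Northpointe's proprietary model.
# Weighted features go through a logistic function, then get binned into a
# 1-10 decile-style score. All feature names and weights here are made up.
import math

def risk_score(features, weights, bias=0.0):
    """Map a dict of defendant features to a 1-10 risk scale."""
    z = bias + sum(weights[name] * value for name, value in features.items())
    probability = 1.0 / (1.0 + math.exp(-z))              # logistic squashing
    return min(10, max(1, math.ceil(probability * 10)))   # bin into deciles 1..10

# Hypothetical weights: note that several inputs (age at first arrest,
# family criminal history) are things the defendant is not responsible for,
# which is the authors' point on pp. 556-57.
weights = {"prior_arrests": 0.30, "age_at_first_arrest": -0.05,
           "unemployed": 0.40, "family_criminal_history": 0.25}

defendant = {"prior_arrests": 3, "age_at_first_arrest": 17,
             "unemployed": 1, "family_criminal_history": 1}

print(risk_score(defendant, weights))  # prints 7 with these made-up inputs
```

The 1–10 binning mirrors the decile-style scores such tools typically report; the point of the toy is only that the output can move on inputs the defendant never chose.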

“One important criticism of algorithmic systems is that they lack transparency. Such systems can be opaque because they are complex, protected by patent or trade secret, or deliberately obscure (Pasquale 2016). But it is worth asking why transparency is important. Transparency may be important for instrumental purposes, and in the case of public use of algorithms, transparency may be important for accountability (Powles 2017). Our view is that transparency is also integral for respecting agency.”

“One of Loomis’s primary complaints in his appeal is that COMPAS is proprietary and hence not transparent. Specifically, he argued that this violated his right to have his sentence based on accurate information.” (560)

  • He claimed that a private company should not effectively make a sentencing judgment when its tool is not transparent; however, judges are not terribly transparent either.
  • “The Loomis court disagreed. It noted that Northpointe’s Practitioner’s Guide to COMPAS explained the information used to generate scores, and that most of the information is either static (e.g., criminal history) or in Loomis’s control (e.g., questionnaire responses). Hence, the court reasoned, Loomis had sufficient information and the ability to assess the information forming the basis for the report, despite COMPAS being proprietary.” (560)

“There is yet another way in which algorithmic systems conflict with agency. They can aggregate individuals’ interests rather than regarding each group-member’s interests as separate. Call this aggregation of interests “herding.” In Rawls’s terms, our criticism is that algorithmic systems fail to “take seriously the distinction between persons”” (561)

  • Herding sorts people into categories rather than examining each of them as a single person, with everything they have going on in their own lives.

“People who have a great deal of cash that they wish to hide (for example to hide an illegal enterprise or to avoid tax liability) may mix that cash with other money (perhaps from legitimate sources) to avoid suspicion. Of course decisions are not the same as cash, and our argument does not turn on the analogy. Our point is instead that, like money laundering, agency laundering involves obscuring the source of something dubious by mixing it with something similar, but seemingly above-board.” (565)

“The literature on algorithmic decision systems points to ways that such systems can be unfair, can cause harm, discriminate, and thwart accountability. Our arguments here do not discount those moral issues. We have made the case that the moral and legal salience of algorithmic systems requires attention to issues of agency, autonomy, and respect for persons. Algorithmic systems may govern behavior in ways that an agent cannot reasonably endorse. They may deny an agent information to which she is entitled. They may fail to respect boundaries between persons. And they may be deployed to launder agency. One issue worth addressing here is whether our arguments identify problems that are unique to algorithmic decision systems. The moral concerns we describe can exist in any kind of decision system. Decision processes that rely on (inter alia) committee or bureaucracy can be processes that individuals cannot reasonably endorse, can be opaque, can fail to treat people as separate individuals, and can launder agency. Our task here, though, has been to examine several types of moral concern in decision-making and how those relate to algorithmic decision processes. That those same concerns apply beyond algorithms shows that the root moral concerns are deeper. Moreover, there are features of algorithmic decision-making that will make some of the concerns we describe particularly acute. First, because of a kind of aura that surrounds mathematical models, people end up trusting them disproportionately (O’Neil 2016, Zarsky 2016, Citron 2008). Second, because such systems are difficult to understand and many believe them to be difficult to understand, people may be reluctant to criticize them. Of course, humans, committees, and bureaucracies are difficult to understand as well, but we may have an intuitive grasp of the kinds of faults in reasoning that they exhibit (motivated reasoning, groupthink, various cognitive heuristics). Lastly, in addition to being complex, algorithmic systems are often proprietary and protected by intellectual property rights, and hence enjoy legal protections that other systems do not (Pasquale 2016).”

Rubel, Alan, Clinton Castro, and Adam Pham. “Algorithms, Agency, and Respect for Persons.” Social Theory and Practice, vol. 46, no. 3, July 2020, pp. 547–572.
