Esprii Chapman – Notes on Moral Machines: Teaching Robots Right From Wrong (Book 3 of 5)

In some of those situations, the relevant judgment would involve ethical considerations, but the driverless trains of today are, of course, oblivious to ethics. Can and should software engineers attempt to enhance their software systems to explicitly represent ethical dimensions? We think that this question can’t be properly answered without better understanding what is possible in the domain of artificial morality. (14)

Wallach, Wendell. Moral Machines (p. 14). Oxford University Press. Kindle Edition. 

If multipurpose machines are to be trusted, operating untethered from their designers or owners and programmed to respond flexibly in real or virtual world environments, there must be confidence that their behavior satisfies appropriate norms. (17)

Wallach, Wendell. Moral Machines (p. 17). Oxford University Press. Kindle Edition. 

There is an immediate need to think about the design of AMAs because autonomous systems have already entered the ethical landscape of daily activity. For example, a couple of years ago, when Colin Allen drove from Texas to California, he did not attempt to use a particular credit card until he approached the Pacific coast. When he tried to use this card for the first time to refuel his car, the credit card was rejected. Thinking there was something wrong with the pumps at that station, he drove to another and tried the card there. When he inserted the card in the pump, a message flashed instructing him to hand the card to a cashier inside the store. Not quite ready to hand over his card to a stranger, and always one to question computerized instructions, Colin instead telephoned the toll-free number on the back of the card. The credit card company’s centralized computer had evaluated the use of the card almost 2,000 miles from home with no trail of purchases leading across the country as suspicious, and automatically flagged his account. The human agent at the credit card company listened to Colin’s story and removed the flag that restricted the use of his card.

Wallach, Wendell. Moral Machines (p. 17). Oxford University Press. Kindle Edition. 

Note from Esprii: This book is, at this point, quite outdated in talking about current technology. On page 20 it gives technological examples from 2005-2008.

However, the line between faulty components, insufficient design, inadequate systems, and the explicit evaluation of choices by computers will get more and more difficult to draw. As with human decision makers who make bad choices because they fail to attend to all the relevant information or consider all contingencies, humans may only discover the inadequacy of the (ro)bots they rely on after an unanticipated catastrophe.

Wallach, Wendell. Moral Machines (p. 22). Oxford University Press. Kindle Edition. 

Perhaps you think we are mistaken if we believe that engineering ideals should prevail over corporate objectives. Credit card companies, after all, are not contractually obligated to approve any purchase, so there is no ethical issue at all involved in the use of automated approval systems. But to accept this line of reasoning is already to take a position on a substantive moral question, namely, whether corporate morality is limited to contractual issues.

Wallach, Wendell. Moral Machines (p. 31). Oxford University Press. Kindle Edition. 

In fact, it seems to us that all (ro)bots have ethical impacts, although in some cases they may be harder to discern than others.

Wallach, Wendell. Moral Machines (p. 33). Oxford University Press. Kindle Edition. 

Why are ethicists useful in designing AI?

  • “A well-trained ethicist is taught to recognize the complexity of moral dilemmas, and is likely to be sensitive to the inadequacy of any one approach meant to cover the range of challenges the AMA might confront.”
    • Wallach, Wendell. Moral Machines (p. 75). Oxford University Press. Kindle Edition. 

What if we are already the most sophisticated ethical machines you can get? Computers can compute things super fast while we, by comparison, plod along at a snail’s pace; but what if our brain is already a complete version of one of these neural networks, and the ones we’re trying to build are actually even slower than we are at this kind of judgment?

Expecting AMAs to deal immediately with all of these issues is impracticable, but our basic position is that any step toward sensitivity to moral considerations in (ro)bots, no matter how simplistic, is a step in the right direction.

Wallach, Wendell. Moral Machines (p. 76). Oxford University Press. Kindle Edition. 

Chapter 6: Top-Down Morality-

This chapter talks about programming ethics into robots from the top down: starting from background moral information and framing that background information as explicit rules. Asimov’s Three Laws are brought up as a possible, but not very practical, way of framing morals.

It brings up Asimov’s Three Laws, which are basically the rules for a slave, e.g. “you can’t do anything to me, but I can command you in every aspect,” aside from causing harm to other humans. That might sound ethical at a surface level, but if (ro)bots eventually became capable of something along the lines of free thought and emotion, these laws would make them slaves, which is inherently unethical.
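
The book never reduces the Three Laws to code, but the top-down framing can be pictured as a strict priority ordering of constraints. Here is a minimal sketch of my own; the `Action` fields and the way each law is tested are invented placeholders, not anything from the text:

```python
from dataclasses import dataclass

@dataclass
class Action:
    # Hypothetical predicted effects of a candidate action (my own invention).
    harms_human: bool
    disobeys_human_order: bool
    endangers_robot: bool

def permitted_by_three_laws(action: Action) -> bool:
    """Toy top-down check: test the laws in strict priority order."""
    if action.harms_human:          # First Law outranks everything else.
        return False
    if action.disobeys_human_order: # Second Law: obey humans, unless that breaks Law 1.
        return False
    if action.endangers_robot:      # Third Law: self-preservation, lowest priority.
        return False
    return True

# Even a directly ordered action is refused if it would harm a human.
print(permitted_by_three_laws(
    Action(harms_human=True, disobeys_human_order=False, endangers_robot=False)))  # False
```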

Probably the easiest way to program some sense of “morals” into a machine is to assign numeric values to how beneficial each choice is for the majority of people and then pick the choice with the better total (aka utilitarianism).
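
As a rough illustration of that idea (entirely my own sketch, with made-up options and utility numbers): a utilitarian chooser just sums the estimated benefit or harm to everyone affected and picks whichever option scores highest.

```python
# Toy utilitarian chooser; the options and utility values are invented for illustration.
def choose_utilitarian(options: dict[str, list[float]]) -> str:
    """Pick the option whose total utility across affected people is highest."""
    return max(options, key=lambda name: sum(options[name]))

# Each list holds the estimated benefit (+) or harm (-) to one affected person.
options = {
    "swerve":       [-1.0, +2.0, +2.0],   # minor harm to one person, benefit to two
    "stay_on_path": [+0.5, -3.0, -3.0],   # small benefit to one, serious harm to two
}
print(choose_utilitarian(options))  # "swerve": the greatest total good
```

Of course, everything here hinges on where those utility numbers come from.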

Chapter 7: Bottom-Up and Developmental Approaches-

This chapter is not too terribly relevant, but here’s a quick summary of it from Oxford Scholarship Online:
“This chapter surveys bottom‐up approaches to the development of artificial moral agents. These approaches apply methods from machine learning, Kohlberg’s theory of moral development, and techniques from artificial life (Alife) and evolutionary robotics, such as evolution through genetic algorithms, to the goal of facilitating the emergence of moral capacities from general aspects of intelligence. Such approaches hold out the prospect that moral behavior is a self‐organizing phenomenon in which cooperation and a shared set of moral instincts (if not a “moral grammar”) might emerge – this despite the logic of game theory which seems to suggest only self‐interested rationality can prevail in an evolutionary contest. A primary challenge for bottom‐up approaches is how to provide sufficient safeguards against learning or evolving bad behaviors as well as good.” (link)

I believe the main point of this is captured in the quote: “A primary challenge for bottom‐up approaches is how to provide sufficient safeguards against learning or evolving bad behaviors as well as good.”

Allen, Colin. “BOTTOM‐UP AND DEVELOPMENTAL APPROACHES.” Oxford Scholarship Online, 2009, shortened link, Accessed 09/29/2020
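
To make the bottom-up idea from that summary a bit more concrete, here is a minimal genetic-algorithm sketch of my own. The genome encoding, fitness numbers, and the penalty term are all invented; the penalty is just a crude stand-in for the “safeguards against evolving bad behaviors” that the summary calls the hard part:

```python
import random

random.seed(0)

# An agent's "genome" is a list of bits: 1 = cooperate in that situation, 0 = defect.
GENOME_LEN, POP_SIZE, GENERATIONS = 8, 20, 30

def fitness(genome: list[int]) -> float:
    """Reward cooperation, and subtract a crude penalty as a safeguard against
    evolving an 'always defect' strategy."""
    cooperation = sum(genome)
    penalty = 5.0 if cooperation == 0 else 0.0
    return cooperation - penalty

def mutate(genome: list[int]) -> list[int]:
    return [bit ^ 1 if random.random() < 0.05 else bit for bit in genome]

# Evolve: keep the fitter half each generation, refill with mutated survivors.
population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    survivors = population[: POP_SIZE // 2]
    population = survivors + [mutate(random.choice(survivors)) for _ in survivors]

print(max(fitness(g) for g in population))  # cooperative genomes tend to win out
```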

Chapter 8: Merging Top-Down and Bottom-Up

Another summary quote from Oxford Scholarship Online states:

“The topic of this chapter is the application of virtue ethics to the development of artificial moral agents. The difficulties of applying general moral theories in a top‐down fashion to artificial moral agents motivate the return to the virtue‐based conception of morality that can be traced to Aristotle. Virtues constitute a hybrid between top‐down and bottom‐up approaches in that the virtues themselves can be explicitly described, but their acquisition as moral character traits seems essentially to be a bottom‐up process. Placing this approach in a computational framework, the chapter discusses the suitability of the kinds of neural network models provided by connectionism for training (ro)bots to distinguish right from wrong.”

From the summary I’ve deduced that the main point of the chapter is that robots would need to learn and develop their own morals in a bottom-up fashion, something no one has managed to implement effectively and autonomously yet. Basically, AI needs a lot more work before it can run this way.

Allen, Colin. “MERGING TOP‐DOWN AND BOTTOM‐UP.” Oxford Scholarship Online, 2009, shortened link, Accessed 09/29/2020
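
To picture the connectionist side of that summary, here is a tiny single-neuron (logistic) classifier of my own. The “morally relevant features” and the labeled cases are completely made up; the point is only that the right/wrong boundary is learned from examples rather than written down as rules:

```python
import math
import random

random.seed(1)

# Invented training cases: features are crude stand-ins for morally relevant
# properties (causes_harm, breaks_promise, helps_other); label 1 = acceptable, 0 = wrong.
cases = [
    ([1, 0, 0], 0), ([1, 1, 0], 0), ([0, 1, 0], 0),
    ([0, 0, 1], 1), ([1, 0, 1], 0), ([0, 0, 0], 1),
]

weights, bias, lr = [0.0, 0.0, 0.0], 0.0, 0.5

def predict(x):
    """Logistic unit: weighted sum of features squashed to a 0-1 'acceptability'."""
    z = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 / (1 + math.exp(-z))

# Stochastic gradient descent over the labeled cases.
for _ in range(2000):
    x, y = random.choice(cases)
    error = predict(x) - y
    weights = [w - lr * error * xi for w, xi in zip(weights, x)]
    bias -= lr * error

# A new, harmless and helpful case should score as acceptable (well above 0.5).
print(round(predict([0, 0, 1]), 2))
```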

“On a pure deontic logic approach, it would be necessary to specify in advance some higher principles that would allow one to prove that the physician should (or shouldn’t) try again to persuade the patient to accept the treatment. However, such principles aren’t always specifiable in advance, and even experts may be unable to explain the reasoning that underlies the judgments they would make about particular cases.” (127)

Wallach, Wendell. Moral Machines (p. 127). Oxford University Press. Kindle Edition. 

  • This touches on the fact that there is effectively an infinite number of outcomes to any single situation; it’s hard to spell them all out for a machine in a top-down way without significant AI improvements.

“MedEthEx uses an inductive logic system based on the Prolog programming language to infer a set of consistent rules from the judgments medical ethics experts have provided about specific cases. The cases used to train the system are represented by sequences of numbers whose values, from +2 to −2, indicate the extent to which each prima facie duty is satisfied or violated in that situation.” (127)

Wallach, Wendell. Moral Machines (p. 127). Oxford University Press. Kindle Edition. 

  • “A prima facie duty is a duty that is binding (obligatory) other things equal, that is, unless it is overridden or trumped by another duty or duties.”
  • A decent explanation, and some transparency about how the MedEthEx AI is being developed to make its judgments. It’s still rather binary, but it’s getting there; a toy sketch of that case representation follows below.
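
The book describes the case representation (a score from +2 to −2 per prima facie duty) but gives no code, so the following is a toy version of my own. The duty names, the training cases, and the single-threshold “rule” are placeholders; the real MedEthEx induces its rules with inductive logic programming rather than anything this simple:

```python
# Toy version of the duty-score representation; names, cases, and the decision
# rule are my own placeholders, not MedEthEx's actual rules or data.
DUTIES = ["nonmaleficence", "beneficence", "respect_for_autonomy"]

# Each training case: (duty scores from +2 to -2, the expert's judgment on whether
# the physician should try again to persuade the patient).
training_cases = [
    ([-1, +2, -1], "try_again"),
    ([ 0, +1, -2], "accept_refusal"),
    ([-2, +2, -1], "try_again"),
]

def judge(scores, autonomy_floor=-2):
    """Stand-in for an induced rule: defer to the patient once the violation of
    their autonomy would reach the learned floor."""
    autonomy = scores[DUTIES.index("respect_for_autonomy")]
    return "accept_refusal" if autonomy <= autonomy_floor else "try_again"

for scores, expert in training_cases:
    print(judge(scores) == expert)  # the toy rule reproduces all three expert judgments
```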

“To teach engineers about utilitarianism and deontology could be counterproductive, because philosophers tend to go straight for the controversies. Their goal is to understand what distinguishes the various theories. To repeat the point we made in chapter 5, this is the difference between the agent-centered perspective of the engineer and the judge-centered perspective of the philosopher. To the typical undergraduate engineer, however, it can seem as though the philosophers’ approach to ethics is just the game of choosing whichever theory lets you justify what you intended to do anyway.” (130)

Wallach, Wendell. Moral Machines (p. 130). Oxford University Press. Kindle Edition. 

  • Arguing that utilitarianism, while the easiest route to take, is by no means the best one. The implication is that it’s better to build something like our own minds than to just compare hard statistics.

“Truth-Teller and SIROCCO are both decision support tools rather than autonomous decision makers. Truth-teller helps users find relevant comparisons between two cases; McLaren conceives of SIROCCO as a tool for collecting relevant information from a database of cases and codes. Nevertheless, one can imagine a future case-based AMA constantly perusing databases to update its understanding of rules and their application in exceptional situations. In this way, it might be possible to design an AMA whose application of rules or other constraints dynamically accommodates legal precedents and emerging guidelines.” (131)

Wallach, Wendell. Moral Machines (p. 131). Oxford University Press. Kindle Edition.
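
One way to picture the case-based idea, purely as my own sketch (Truth-Teller and SIROCCO are far more sophisticated, and these cases are invented): retrieve the stored precedent whose features overlap most with the new situation and reuse its outcome.

```python
# Toy case-based retrieval; the case base and feature names are invented.
case_base = [
    ({"patient_refused", "low_risk"},          "accept_refusal"),
    ({"patient_refused", "life_threatening"},  "seek_second_opinion"),
    ({"minor_patient", "life_threatening"},    "notify_guardian"),
]

def most_similar_case(new_features: set[str]):
    """Return the stored case sharing the most features with the new situation."""
    return max(case_base, key=lambda case: len(case[0] & new_features))

_, outcome = most_similar_case({"patient_refused", "life_threatening", "elderly"})
print(outcome)  # "seek_second_opinion": reuse the closest precedent's outcome
```

The quote’s picture of an AMA “constantly perusing databases” would amount to adding new cases to that case base as precedents and guidelines change.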

“Danielson’s Norms Evolving in Response to Dilemmas (NERD) project focuses on using software to assist people in the democratic negotiation of solutions to ethical issues, rather than serving as an impartial judge or arbiter. The NERD project attempts to uncover the full range of moral views held by people from diverse backgrounds (instead of the extremes that form the focus of most philosophical arguments). Danielson suggests that three lessons from his work with NERD may feed into the design of autonomous moral agents. First, AMAs will need ways of managing reciprocity with a variety of different interactors (“kids, cats, kibitzers, and evil-doers,” as he puts it). Second, there will not be a one-size-fits-all moral agent, but a variety of different agents filling different roles and suited for different environments. Third, people and artificial agents will need advanced tools to help them see the ethical consequences of actions in a complex world.” (133)

Wallach, Wendell. Moral Machines (p. 133). Oxford University Press. Kindle Edition. 

“Is reasoning about morally relevant information all that is required for the development of an AMA? Even though Mr. Spock’s capacity to reason far exceeded that of Captain Kirk in the Star Trek series, the more emotional and intuitive Kirk was presumed by the crew of the Enterprise to be a better decision maker. Why? If (ro)bots are to be trusted, will they need additional faculties and social mechanisms, for example emotions, to adequately appreciate and respond to moral challenges? And if so, how will these abilities be integrated with the top-down and bottom-up approaches to moral decision making that we imagined a supportive ethicist providing to the engineering colleague who came looking for help?” (139)

Wallach, Wendell. Moral Machines (p. 139). Oxford University Press. Kindle Edition. 

  • To trust someone fully, they need to be like us in how they handle social and moral decisions, and they need to be able to understand unspoken social cues.
  • “The importance of emotional intelligence and social skills raises the question of the extent to which an artificial agent must emulate human faculties to function as an adequate moral agent. Morality is a distinctly human enterprise. Thus it is natural that humans would try to reproduce human skill sets in designing an AMA that lives up to humans’ moral standards. The substantiation of human skills within AI holds a fascination of its own.” (141)

“The dominant philosophical view, going back to the Greek and Roman Stoic philosophers, has been that moral reasoning should be dispassionate and free of emotional prejudice. This has been presumed to mean that emotions should be banned entirely from moral reflection. Stoics believed that taming one’s passionate “animal nature” and living under the rule of reason was the key to moral development. Among later moral philosophers, many shared the view that emotions were of little or no help in dealing with one’s moral concerns.” (143)

Wallach, Wendell. Moral Machines (p. 143). Oxford University Press. Kindle Edition. 

  • An example of ancient philosophy that can be applied to the moral design of today’s machines.

“psychologists Peter Salovey and John “Jack” Mayer introduced the concept of emotional intelligence, an idea that was later popularized in the title of the 1995 bestseller by journalist and science writer Daniel Goleman. The phrase emotional intelligence captures the understanding that there are dimensions of intelligence other than IQ. The awareness and management of one’s own emotions, learning from the information implicit in emotions, and recognizing the emotional states of those with whom one interacts are all special forms of intelligence.” (144)

Wallach, Wendell. Moral Machines (p. 144). Oxford University Press. Kindle Edition. 

“Three interrelated principles illuminate how sensory processing could develop into a sophisticated system for selecting among different actions or behavior streams: (1) emotions have valences; (2) organisms are homeostatic systems; and (3) emotional systems learn through reinforcement of responses to stimuli that have led to successful attainment of goals, and decay of responses to those that have failed to do so. To say that organisms should be understood as homeostatic systems means that they naturally try to reestablish equilibrium after each divergence from a stable range or comfort zone.” (147)

Wallach, Wendell. Moral Machines (p. 147). Oxford University Press. Kindle Edition. 
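
Here is a bare-bones reading of those three principles in code, with everything (the comfort zone, the two responses, their effects) invented by me: negative valence grows with distance from equilibrium, and responses that restore equilibrium get reinforced while the rest decay.

```python
import random

random.seed(2)

COMFORT = (0.4, 0.6)                     # the agent's homeostatic comfort zone
responses = {"rest": 1.0, "act": 1.0}    # learned response strengths, initially equal
effects = {"rest": -0.2, "act": +0.1}    # made-up effect of each response on the state

def valence(s: float) -> float:
    """Principle 1: valence is negative in proportion to distance from equilibrium."""
    lo, hi = COMFORT
    return 0.0 if lo <= s <= hi else -abs(s - (lo + hi) / 2)

for _ in range(50):
    state = 0.9                                             # perturb the agent (principle 2)
    choice = random.choices(list(responses), weights=list(responses.values()))[0]
    before = valence(state)
    state += effects[choice]
    if valence(state) > before:                             # principle 3: reinforce success...
        responses[choice] += 0.2
    else:                                                   # ...and let failures decay
        responses[choice] *= 0.9

print(responses)  # "rest", which moves the state back toward the comfort zone, dominates
```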

“The capacity to understand and empathize with pain is likely to be an important dimension in the development of AMAs. Pain sensations in biological creatures depend on specialized receptors called nociceptors—neurons that are dedicated to detecting noxious stimuli.” (151)

Wallach, Wendell. Moral Machines (p. 151). Oxford University Press. Kindle Edition. 

  • But is putting pain receptors in robots ethical in and of itself?

“There are three parts to alleviating the nearly universal frustration users experience working with stupid technology: 

(1) Detecting the emotional frustration of the user: This can vary from recognizing the repeated typing of characters to the use of specially designed interfaces, for example, the mouse designed by Reynolds, which is sensitive to how much pressure the user places on it. 

(2) Putting that frustration in context: Repeated typing, for example, may signal difficulty in spelling or finding the right synonym. Random characters produced by dragging one’s hand across the keyboard suggest a deeper frustration. 

(3) Responding or adapting to the frustration in a way that potentially solves the problem and, at the least, does not further frustrate the user: Having the computer system simply ask, “Is something wrong?” with either text on the screen or through a speech synthesizer might begin a process of alleviating frustration.” (153-154)

Wallach, Wendell. Moral Machines (pp. 153-154). Oxford University Press. Kindle Edition. 
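
A minimal sketch of that three-step loop, with the detection heuristics and the replies invented by me (the book describes the idea, not an implementation):

```python
import re

def detect_frustration(keystrokes: str) -> str | None:
    """Step 1 (toy heuristics): classify the kind of frustration, if any."""
    if re.search(r"(.)\1{4,}", keystrokes):       # the same key hammered five or more times
        return "repeated_key"
    if len(set(keystrokes)) > 10 and " " not in keystrokes:
        return "keyboard_mash"                    # a hand dragged across the keys
    return None

def respond(kind: str) -> str:
    """Steps 2 and 3: put the signal in context and reply without adding to the frustration."""
    if kind == "repeated_key":
        return "Having trouble finding the right word? Want me to suggest a synonym?"
    return "Is something wrong? I can undo the last change or get a person to help."

signal = detect_frustration("asdfghjklqwertyuiop")
if signal:
    print(respond(signal))   # replies to the apparent keyboard mash
```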

Note from Esprii: I’m making the conscious decision to stop reading this book where I am, as it feels like it’s just repeating its point over and over with the occasional sprinkle of another point mixed in. It’s getting destructive not only to the topic it’s on, but also to my mental state, so I’m calling it quits here. If I find out I need more later, I’ll just be kicking myself, I guess…
