Ethical dilemmas

Not long ago, I had a brief chat with a friend. I described a recent ride in a vehicle whose driver was participating in a beta test of the manufacturer’s self-driving system. I sat in the back seat and watched as the driver took his hands off the steering wheel and made a few entries on a touchscreen display. The system worked reasonably well, giving us a comfortable ride until we passed through a stretch where a row of trees near the road broke the bright sunlight into flashing stripes of light and shadow. Unable to adjust to the conditions, the self-driving feature turned itself off and required direct input from the driver, including steering and use of the accelerator, for the trip to continue. My comment to my friend was that I could easily imagine a technological fix for the problems of light and shadow, but that self-driving vehicles face more significant challenges rooted in the lack of philosophical and ethical education among the engineers who design them.

My companion immediately referred to what has been dubbed “the trolley problem.” The trolley problem is a series of thought experiments posing ethical dilemmas about choosing between killing several people and killing a single individual. In the most common presentation, a runaway train or trolley is headed toward a section of track where five people are standing. A bystander sees the impending collision and also sees a switch that can be pulled to divert the runaway trolley onto a side track where only a single person will be killed. The dilemma is whether to do nothing, in which case five people die, or to intervene, in which case only one person dies, but that person was initially safe.

My conversation with my friend did not permit time for me to push beyond the simple arithmetic usually applied to the trolley problem. In general, most people assume that the death of one is preferable to the death of five and judge that there is a moral obligation to act to decrease the number of deaths. Had we had the time, I might have gone into much more detail. Still, all I was able to offer at the time was a comment that the trolley problem stands in a line of ethical discussion that was going on for centuries before trolleys existed, and that as a stand-alone thought experiment, it lacks the complexity and nuance that ought to be considered when evaluating self-driving vehicles.

I am not an academic philosopher, though I studied philosophy and the history of philosophy as an undergraduate. I have, however, studied ethics enough, and run into enough ethical decisions in my life, to be wary of simplistic solutions to complex problems. I fear using the trolley problem in the discussion of self-driving vehicles because it presents a relatively simple mathematical solution. The engineers designing the hardware and software of self-driving cars are practiced at finding mathematical solutions. They can easily program a vehicle to quickly compare numbers and choose the action that will result in the smallest number of victims. The real world, however, rarely presents ethical problems as simple as choosing between one victim and five.
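
To make the worry concrete, here is a minimal sketch, in Python, of the kind of rule the trolley problem invites. The names and numbers are my own illustration under the assumption of a pure body-count criterion; this is not anyone’s actual vehicle code.

```python
from dataclasses import dataclass

@dataclass
class Action:
    """One possible maneuver and its predicted outcome."""
    name: str
    expected_deaths: int

def choose_action(actions: list[Action]) -> Action:
    """The trolley-problem rule: pick whichever action
    is predicted to kill the fewest people."""
    return min(actions, key=lambda a: a.expected_deaths)

# The textbook dilemma reduces to a one-line comparison.
stay = Action("stay on the main track", expected_deaths=5)
divert = Action("divert to the side track", expected_deaths=1)
print(choose_action([stay, divert]).name)  # divert to the side track
```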

For starters, the trolley problem does not consider the possible danger to the people on the trolley itself. What if the sudden switch in tracks causes the trolley to tip over and risk the lives of its occupants? The algorithms applied to self-driving cars must be able to weigh possible victims both inside and outside of the multiple vehicles involved in an accident scenario. As any accident investigation reveals, numerous factors and decisions are involved in a crash. In the case of the self-driving car in which I was a passenger, the safety system relied on returning control to a human driver whenever the automatic system lacked sufficient information to continue. Yet in the scenario we experienced, the alertness or distraction of the person sitting in the driver’s seat is not programmed into the vehicle’s computer, nor is that person’s level of impairment. At a minimum, the driver was distracted by the touchscreen in the car. What if he had also been talking on the phone, looking at the scenery, engaged in conversation, or otherwise distracted?
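
Once occupants, probabilities, and a possibly distracted driver enter the picture, even a toy version of the calculation stops being a one-line comparison. The sketch below, again purely hypothetical, shows how the rule turns into probability-weighted guesses.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """A predicted consequence of a maneuver, with uncertainty."""
    deaths_outside: int   # pedestrians and people in other vehicles
    deaths_inside: int    # the car's own occupants
    probability: float    # how likely this outcome is, 0.0 to 1.0

def expected_deaths(outcomes: list[Outcome]) -> float:
    """Probability-weighted deaths across all predicted outcomes."""
    return sum(o.probability * (o.deaths_outside + o.deaths_inside)
               for o in outcomes)

# Swerving might save pedestrians but risk rolling the vehicle;
# handing control to a possibly distracted driver is its own gamble.
swerve = [Outcome(deaths_outside=0, deaths_inside=2, probability=0.3),
          Outcome(deaths_outside=0, deaths_inside=0, probability=0.7)]
brake_and_hand_off = [Outcome(deaths_outside=3, deaths_inside=0, probability=0.4),
                      Outcome(deaths_outside=0, deaths_inside=0, probability=0.6)]
print(expected_deaths(swerve))              # 0.6
print(expected_deaths(brake_and_hand_off))  # 1.2
```

The point of the sketch is not the answer it prints but the inputs it demands: every one of those probabilities is an estimate the real world rarely supplies.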

My philosophical and ethical education came with a distinct cultural bias. I studied ancient Greek and Roman philosophy and nineteenth-century German idealist philosophy. While attending graduate school on the campus of the University of Chicago, I was aware that the philosophy department there reflected those same biases, along with a strong bias toward analytic philosophy. When considering some of the vast ethical challenges of the 21st century, those biases can prevent consideration of tribal and indigenous ethics and of Eastern philosophical and moral wisdom.

In the current education of software and hardware engineers, academic philosophy and ethics are given little consideration. Engineering schools do not typically offer philosophy or ethics as areas of study. The gap between the philosophy department of the University of Chicago and the engineering department of Stanford University is much greater than the physical distance between Illinois and California.

Yesterday, as I tutored my grandson in his middle school algebra, we discussed various kinds of single-variable problems. The text presented equations with only a single solution, including those for which the only solution is zero. It also presented equations with infinitely many solutions and equations with no solution at all. The exercise was to determine which type a given equation was. It is simple introductory algebra, and the lesson was designed to teach students to discard problems that did not offer a single solution.
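
For the concrete version: a linear equation of the form ax + b = cx + d has exactly one solution when a and c differ, no solution when a equals c but b and d differ, and infinitely many solutions when the two sides are identical. A few lines of Python, offered only to illustrate the lesson, make the three cases explicit.

```python
def classify(a: float, b: float, c: float, d: float) -> str:
    """Classify the linear equation a*x + b = c*x + d."""
    if a != c:
        x = (d - b) / (a - c)
        return f"one solution: x = {x}"
    if b != d:
        return "no solution"
    return "infinitely many solutions"

print(classify(2, 3, 0, 3))  # one solution: x = 0.0
print(classify(2, 3, 2, 5))  # no solution
print(classify(2, 3, 2, 3))  # infinitely many solutions
```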

I wonder whether self-driving car engineers have considered these categories of problems from a mathematical perspective. Like middle school students, have they been taught to discard problems with no solution or with infinitely many solutions?

Philosophers have known since ancient times that there are ethical problems with multiple solutions and moral problems with no solution at all. Their experience might inform the engineers who write software for self-driving cars. The task is far more complex than a thought experiment involving an imaginary trolley.
