Though it is not as “new” as when I first started putting these ideas together, there are still controversies and questions about the prevalence of self-driving vehicles. I have no desire to see any destruction caused by a computer malfunction in a self-driving vehicle, but the reality of such vehicles still makes these thoughts relevant. Adding further relevance to this topic, though not necessary for following this article, is the new joint document from the Vatican’s Dicastery for the Doctrine of the Faith and Dicastery for Culture and Education, Antiqua et Nova (Ancient and New), “on the Relationship Between Artificial Intelligence and Human Intelligence.”
The new technological frontier that is only becoming more ubiquitous, more powerful, and more disquieting is the development of artificial intelligence. While the rise of the machines and the replacement of humans have long been the stuff of science fiction, the questions surrounding these possibilities are becoming science fact. In seemingly every story that includes A.I. of some kind, ethical questions are raised. Most of the time these questions are directed at the human characters of the story and, indirectly, to the audience. But what of the machines themselves? How would they respond to ethical dilemmas? Is this not part of the problem?
One of the most famous ethical dilemmas of the last century involves technology slightly more primitive than that used by the self-driving cars in question. In 1967, British philosopher Philippa Foot introduced the “Trolley Problem” to help uncover the deeper questions behind one’s ethical choices (for those of you already very familiar with, or even fatigued by, the trolley problem, don’t stop reading; this is not simply a rehashing of it, I promise). It falls under the category of meta-ethics because it shows us what informs our ethics. Its fundamental, metaphysical nature makes it an extremely important question as well as a deeply frustrating one for some, because it does not simply “give the right answer.” Simply put: imagine an unstoppable trolley moving down a track, and tied to that track are four railway workers. There is a diverted track, and you hold the lever that can save the four. However, on that diverted track is one worker, who will die if you pull the lever.
Do you? And most importantly, why or why not?
Regardless of your answer to the first question, your answer to the second question helps you understand what informs your ethical system. A common answer is “yes,” because “four lives are greater than one” (or something to that effect). This equates to the utilitarian ethic of the “greatest good for the greatest number” associated with Jeremy Bentham and John Stuart Mill. Various wrinkles can be thrown in, but they do not change the utilitarian approach. Another common answer is “no,” because “one must never actively hurt another person” (essentially the Hippocratic Oath). This amounts to a deontological ethical system of obedience to a universal law, associated with Immanuel Kant. These are the simplest forms of the two primary modern ethical systems one finds. They are also woefully insufficient foundations for ethics.
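To make the contrast concrete, here is a deliberately crude sketch of what each answer looks like when reduced to a decision procedure. The function names and numbers are my own hypothetical illustrations, not anyone’s actual software.

```python
# Two toy "ethics modules" for the trolley scenario (illustrative only).

def utilitarian_choice(on_main_track: int, on_side_track: int) -> str:
    """Greatest good for the greatest number: compare the body counts."""
    return "pull the lever" if on_side_track < on_main_track else "do nothing"


def deontological_choice(on_main_track: int, on_side_track: int) -> str:
    """Never actively harm a person: pulling the lever is an act, so refuse."""
    return "do nothing"  # the counts are irrelevant to the rule


# Four workers on the main track, one on the diverted track.
print(utilitarian_choice(4, 1))    # -> "pull the lever"
print(deontological_choice(4, 1))  # -> "do nothing"
```

Both “modules” produce an answer instantly; neither can say why its answer is good.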
Ironically, these two systems are also the only two of which Artificial Intelligence is capable.
Considering utilitarianism, A.I. is far better than we are at quantifying pre-installed variables and manipulating data in order to attain a pre-programmed goal. Take the well-known “Paperclip Problem,” which poses a scenario where an A.I. is programmed with the goal of making paper clips. Because it is programmed to work efficiently and effectively, the A.I. applies all of the available information about how to produce paper clips without considering the consequences. It has a “greatest good” (make paper clips) and stops at nothing to attain it. The danger lies in deciding that “greatest good.” There is the question of who decides (the engineer, and what are his or her qualifications?) and how (how many variables can be programmed into the formula?). When the A.I. trolley is on the track, who did the programming, and what data will inform its decision?
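As a purely illustrative sketch (the resource names and quantities are invented), a paperclip maximizer amounts to nothing more than an objective function with a single term:

```python
# The objective counts paperclips and nothing else, so every reachable
# resource gets converted, whatever it was before.
resources = {"steel": 100, "office furniture": 40, "power lines": 25}

def make_paperclips(stock: dict) -> int:
    clips = 0
    for item in list(stock):       # iterate over a copy so we can pop safely
        clips += stock.pop(item)   # consume the resource, whatever it is
    return clips

print(make_paperclips(resources))  # -> 165
print(resources)                   # -> {} : nothing left standing
```

Nothing in the objective penalizes what gets consumed along the way, because no one thought to program it in.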
Utilitarianism fails not simply because it is capable of horrific ends-justify-the-means scenarios, but on a deeper level because it depends on something more fundamental: a non-question-begging definition of the “greatest good.” Some utilitarian accounts of the “greatest good” appeal to pleasure, or to a standard list of goods, determined either by the individual or by agreement of the community, but both of these sources also appeal to something beyond themselves. The calculus is useless without it. Utilitarianism appeals to a good that precedes usefulness, something akin to charity or at least justice, i.e., virtue. Utilitarianism on its own cannot account for justice but must assume it. This makes it insufficient as an ethical system. Even the best utilitarian, such as A.I. would make, is still an insufficient ethicist.
The other popular ethical system today, though few would care to admit it, is deontology. The word comes from the Greek deontos, meaning duty or obligation. It amounts to an oath, either explicit or implicit, that one takes to a rule or set of rules. It is basically a secular Ten Commandments. I referenced the Hippocratic Oath above, which is just such a set of rules one promises to follow. Artificial Intelligence is very good at following rules, but it can only question its programming when it has been programmed to do so, which means it is not really questioning its programming at all, and so on. Isaac Asimov, in I, Robot, recognized something profound in inventing the “Three Laws of Robotics.” The significance is that those laws were vague enough that a human could interpret a way around them and program the robot to follow his or her interpretation. Which law is going to take precedence in the trolley’s programming when the pedestrian unexpectedly crosses the street? Does the “save humans” directive then give way to “save more humans” or “save more important humans”? Here we are back to utilitarianism.
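A toy sketch, again with invented names, shows how quickly a precedence list of laws collapses into counting once the highest law binds both outcomes:

```python
# Hypothetical Asimov-style rules as an ordered precedence list.
RULES = ["do not harm a human", "obey instructions", "protect yourself"]

def decide(on_main_track: int, on_side_track: int) -> str:
    top_rule = RULES[0]  # "do not harm a human" outranks the others
    # The top rule forbids harming either group, so it cannot settle the case.
    # The tie-breaker below is one programmer's interpretation, not the rule itself.
    if on_side_track < on_main_track:
        return f"pull the lever (reading '{top_rule}' as 'harm fewer humans')"
    return "do nothing"

print(decide(4, 1))
```

The “law” decides nothing here; the programmer’s interpretation of it does.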
Deontology likewise fails not simply in practice, though we know where “just following orders” can get us, but also in its foundation. If the system is built on obedience, for what reason must one be obedient? What is the good in that authority to which I am duty-bound? Whatever it is, whether intrinsic to the office or to the person, it is just another virtue. Like its utilitarian cousin, deontology is equally dependent on virtue. Then, like many humans who rightfully question the deontological approach to ethics, one must ask whether obedience is really the greatest virtue and therefore the one that should form all the others. Even obedience must be subject to some greater good.
Because of the nature of Artificial Intelligence, which projects information exponentially into the future, it acts as a sort of reductio ad absurdum for philosophical questions. We get to see what a principle looks like when it is carried to its logical conclusion, for good or for ill. That is what is happening here with the two modern ethical systems of utilitarianism and deontology: taken to the extreme, in their most purified forms, they can be seen in a new light.
Ultimately, one is left with one ethical system that not only most closely reflects our humanity but also accords most closely with reason and with faith. Because of this, it is also the one that A.I. cannot copy. This ethical system is called Virtue Ethics, and it involves making one’s actions adhere as much as possible to the virtue appropriate to the situation. More simply put, it is the use of prudence formed by charity. Prudence, or practical wisdom, is making the wise choice given the circumstances. What informs this wisdom but justice, temperance, and fortitude, the other three cardinal virtues? Will the outcome be fair to those most directly involved? Is the decision approached with moderation? Will it stand up to adversity? This prudence, along with every other virtue, is formed by charity, which means prudence is always directed toward the most loving option even if it does not “make sense” on the surface. From the standpoint of Christian revelation, this charity is itself informed by faith and hope: faith that sees deeper realities than what is on the surface, and hope that sees further realities than this present moment.
Comprehending these virtues can only take place in an intellect, which exists only within the rational soul. This rational soul must form a body, but it does not form a body with four wheels. The human body is formed by the rational soul, whose powers include the intellect and the will. It is this intellect that can apprehend substances, including the substances of the virtues. These virtues, as both Aristotle and Aquinas said, are what form the actions of the happy, blessed life. They not only save lives on the track; they help save the soul of the one charged with pulling, or not pulling, the lever.