Common sense probably dictates that a police force isn't necessary to remove a legitimate paying customer from a seat on a plane because of an arbitrary decision to overbook a flight. The violent removal of a passenger from a United Airlines flight because the company wanted his seat for an employee seemed to violate both his contract to fly in that seat and common sense. Why was he chosen? Randomness? If so, on what basis is the randomness justified?
What if the decision to remove him had been the responsibility of some AI? Then what would have happened? Would an artificial intelligence have sent in the police? It might have, of course. But why? Isn’t AI supposed to overwhelm us with its superior computing power, eventually taking positions of authority? Should the use of shared place, no matter how small, be subject to pure reasoning that AI generates?
Computer decisions are based on a set of unshakeable rules. Human decisions, in contrast, are never so easy. The rules that govern them are somewhat mysterious. Think of any important decision you have made, such as sacrificing for a loved one or a stranger. Think of deciding to date someone. Is there an algorithm you would want to make that decision for you? Even those who subscribe to online dating services still have the matter of personal, face-to-face interaction to weather.
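To make the contrast concrete, here is a minimal sketch of what such an "unshakeable rule" looks like in practice. Everything in it is hypothetical, invented for illustration, not any airline's actual procedure: the function name, the loyalty-tier rule, and the fixed random seed are all assumptions. Note that while the rule executes identically every time, each of its ingredients was an arbitrary human choice.

```python
import random

def select_for_removal(passengers, seed=0):
    """Apply a fixed rule: among passengers in the lowest loyalty
    tier, pick one pseudo-randomly. The rule never bends, but its
    ingredients (the tier ordering, the seed, the decision to use
    randomness at all) were arbitrary human decisions."""
    lowest_tier = min(p["tier"] for p in passengers)
    candidates = [p for p in passengers if p["tier"] == lowest_tier]
    rng = random.Random(seed)  # fixed seed: "random" yet fully determined
    return rng.choice(candidates)["name"]

# A toy manifest: passenger B and C share the lowest tier.
passengers = [
    {"name": "A", "tier": 2},
    {"name": "B", "tier": 1},
    {"name": "C", "tier": 1},
]
print(select_for_removal(passengers))
```

Run twice with the same seed, the function names the same passenger both times; the "randomness" the airline might invoke to justify its choice is itself just another rule someone wrote down.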
You run an airline. You have a policy. You follow it unflinchingly. You throw some passenger out of a purchased seat. Something's wrong. Policy derives from the Greek polis, "city." The word is obviously related to "police," and that relationship implies some kind of enforcement. But the making of policy is arbitrary and often affected by the emotions of policy-makers. If those same emotionally affected policy-makers write the rules for an artificial intelligence management team that will unwaveringly carry them out, where will human common sense and compassion apply?
No two people who walk onto a plane, make a purchase, sign a contract, or enter into some other type of interaction are the same. We can use statistics, strategic plans, and rules to make decisions about them, but their needs and desires, failings and successes, and purposes will always exceed the understanding of a simple policy algorithm put arbitrarily in place by the emotional creatures who relinquished their common sense to an artificial intelligence, or even to a martinet manager.
The artists, novelists, scientists, philosophers, psychologists, and technicians who have considered this problem are numerous and insightful. In 2001: A Space Odyssey, HAL, the ship's AI, travels with the astronauts. HAL decides on the basis of reason that the astronauts are an unnecessary burden on the ship's resources. HAL decides, in effect, to remove passengers from their seats.
Apparently, human HALs have always existed in their policing of shared place. Now, humans who rely on AI to make decisions, believing it will eliminate arbitrariness, fail to realize that the decision to delegate was itself made arbitrarily. That makes AI decisions ultimately arbitrary as well.