I asked that question of some experts in the field of anger management at the National Anger Management Conference in Washington, D.C., just four days before Hutson’s article appeared.* Not that I was expecting a detailed answer from those accomplished experts; writing software for a robot would consume time more valuably spent helping angry humans with their uncontrolled emotions. But I asked the question in light of my own desire to understand human emotion—any human emotion.** The key to making a robot humanlike is to incorporate whatever we deem to be human into its programming. We know already that we can incorporate language. For example, my computer catches me when I misspell a word and tries to correct my intentional conversational fragments.***
Science fiction has addressed the issue of robots and human nature in many stories. Take Star Trek: The Next Generation, for example. Data is a robot (android) in search of its (his) inner human. The first Star Trek series also addressed the issue on more than one occasion, and the first Star Trek movie ended with a merger of human and machine. Remember, too, that famous computer in 2001: A Space Odyssey. HAL is highly rational but lacking in feeling for the dignity and value of the humans on the spaceship whom he attempts to kill.
It’s difficult to humanize a machine, and when writers and directors do humanize one, the product is often more likely to put real humans in jeopardy, the way Blade Runner’s replicant Roy Batty nearly kills Deckard or the cyborg in The Terminator almost kills Sarah. Wasn’t artificial intelligence one of the dangers that the late Stephen Hawking said we faced?
If we can answer the question I asked about making an angry robot, we might learn more about what it is that we are. Anger can be an intense emotion. But as experts acknowledge, it manifests itself in a variety of expressions and levels of intensity that change with person, place, and circumstance. Through the evolution of mammals, those variations of emotion became programmed into humans. Would robotic evolution follow a similar path? Or, have we started robotic development from where we now stand, with somewhat inadequate knowledge of what we are? Note that there are numerous versions of philosophy and psychology, and that those versions have changed over time.
How do we program variations into robots? The giant robot Gort in the original film The Day the Earth Stood Still had absolute power to eliminate any threat, even if that threat came from a species that initially granted such power to Gort and its ilk. As emotionless guardians of the galaxy, Gort’s kind would have no qualms about killing.
If we do program an angry robot, can we also include in the programming the potential to manage that anger with the help of an anger management specialist? Could a specialist apply psychological techniques to calm a raging robot?
Making an angry robot—or a loving one full of empathy—would be the ultimate accomplishment in AI design. True, an angry robot would be dangerous. We already have a good idea of what anger can do in and to our species. But in designing an angry robot, we would have to know anger holistically. We would have to understand it, not just in terms of twenty-first-century social constructs and psychological analyses, but rather as a human trait traceable from some Cain in a cave to some astronaut in a spacecraft on the way to Proxima Centauri.
The current status of AI is interesting. Try typing ai into a Word document. You’ll get an underline that your computer uses to indicate you made a mistake; AI, not ai, is the proper way to address the entity. Now type human. The computer doesn’t seem to care whether or not you use lowercase unless you use the word to begin a sentence.
So, as AI evolves, will it care? If it does care, if it underlines human because it expects a capital “H” out of respect, will it also empathize? Will it get angry if we write “ai”? Will it get help to manage its anger? Oh! And one more question: Will an angry robot seek emotional help from a human anger management specialist or a robotic one?
* http://www.sciencemag.org/news/2018/04/could-artificial-intelligence-get-depressed-and-have-hallucinations
**Saying I want to understand emotion might raise an eyebrow. Understanding. Emotion. One implies rationality; one doesn’t. Can we understand emotions? Or, do we simply apply logical constructs that satisfy the outer brain while frustrating the inner one? Psychologists, counselors, and psychotherapists might note that they have a handle on emotions, and they might refer me to their manual on disorders to explain how emotions play their roles. Or, they might point out behavioral manifestations of emotions. Or, they might refer me to studies on compassion and empathy. “There,” they might say, “that’s what that emotion ‘means’.” So, for a layperson like me, understanding might not be possible. Love? Well, why can’t we express it in some language other than “I love you”? Everyone says that love is a “powerful” emotion; yet, it appears to be limited to a three-word expression. Anger? “I’m very angry. I’m very upset.”
Do we “understand” emotions when we connect with another emotionally? Is it possible that I just don’t understand the word understand?
***My computer does not, however, express concern that I often write sentence fragments. Can you imagine: “Dummy, how many times do I have to correct your grammar? How many times will you write a sentence fragment? I’m getting tired of this.”