When I look throughout Nature to see the consequences of social life without “teachers,” I see rogue teenage elephants wantonly destroying or harming, behavior that would be minimized by the presence of a family unit or of a dominant adult elephant, now absent because of poachers seeking ivory. When I look at the sometimes rampaging gangs of inner-city youths whose family structure and modeling are minimal, I see analogs of those parentless rogue elephant teenagers. Now, are we going to have robots teaching themselves?
That’s a scary sci-fi life I don’t wish on humanity. A world in which Artificial Intelligence determines ethics might not lead to more crimes against humanity, but there’s no guarantee that it won’t. All those scary sci-fi scenarios might play out. Of course, one could argue from the “junk-in, junk-out” perspective that it’s ultimately a human behind any robot’s makeup, that people will dictate how and when robots learn through experimentation, that people, in short, do the programming. But it’s not beyond the realm of potential reality for robots to design robots or for programs to create ever more sophisticated programs. We certainly have many imaginative models in a vast library of science fiction.
The dangers that Artificial Intelligence might foist on mankind could be my fault. Well, not mine personally, but mine in general, that is, humanity’s. What if we lack the wisdom necessary to keep humans safe in a world of robots? It’s not just death by Uber self-driving car that concerns me. Military robots are already on the threshold of posing a threat.
“But you are a parent, and you were a professor. Certainly, in those roles you did some programming, some setting up for future decisions. There had to be some ‘wisdom’ because your kids and your students seem to have turned out all right—whatever that means.”
I did, in a sense, do my share of programming, but with limited knowledge: some from what I experienced, some from what my elders said, and some from what I read and thought about. All rather limited in the big scheme of things. Also, the programming went into human brains, all of them capable of tweaking whatever I “programmed” or of rejecting it outright. All were capable of nuanced learning, and all were free to weigh models of thinking and behavior not just through comparisons but also through combinations, often unexpected combinations they derived creatively. Elephants don’t show other elephants how to deal with every individual baobab tree, just with baobab trees in general.
Those I “programmed” had billions of neurons with trillions of connections operating on billions of years of evolution, sometimes deriving what others might term “unlikely” solutions and often acting out unexpected behaviors, with most, if not all, of them guided by some moral compass that combines learning and experience, the cognitive and the limbic parts of the brain.
So, the problem of Artificial Intelligence is, at its root, a question of what we put into the system at the outset to avoid getting those wild teen elephants or humans that learn on their own. The question we might ask ourselves is whether we can determine an ethics of decision and action that we would all accept in a population of robots. And the question isn’t easy to answer because of differences in culture. Take members of ISIS, for example. In their destruction of Palmyra, they acted like teens without parents, but they did worse than vandalize: they tortured, enslaved, and killed other humans. What if they are the model for an AI that learns by experimentation? Isn’t that one of those standard sci-fi models? What if the “programmer” is one who believes that a certain group of humans should be eliminated? Or what if, as in 2001: A Space Odyssey, a HAL of some kind decides through experimentation that people aren’t necessary? Many sci-fi authors and some ethics philosophers have suggested that such a scenario is possible.
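To make “what we put into the system at the outset” concrete, here is a minimal sketch in Python. All of the action names and rules are hypothetical, invented only for illustration; this is a toy picture of the idea, not an implementation of any real AI system:

```python
import random

# A toy sketch: "hard constraints" stand in for the ethics installed at the
# outset; random choice stands in for whatever the system later teaches
# itself by experimentation. All names here are hypothetical.

HARD_CONSTRAINTS = [
    lambda action: action != "harm_human",       # installed before deployment
    lambda action: action != "destroy_history",  # no Palmyras
]

POSSIBLE_ACTIONS = ["help_human", "harm_human", "build", "destroy_history"]

def unconstrained_agent():
    # The "rogue teenage elephant": free experimentation, no inherited limits.
    return random.choice(POSSIBLE_ACTIONS)

def constrained_agent():
    # The "parented" agent: experimentation is allowed only inside the limits
    # set at the outset. If those limits are wrong, so is the agent.
    allowed = [a for a in POSSIBLE_ACTIONS
               if all(rule(a) for rule in HARD_CONSTRAINTS)]
    return random.choice(allowed)

if __name__ == "__main__":
    print("unconstrained:", [unconstrained_agent() for _ in range(5)])
    print("constrained:  ", [constrained_agent() for _ in range(5)])
```

Notice what even this toy concedes: everything hinges on who writes the constraints. A “programmer” who believes a certain group of humans should be eliminated would simply write a different list, and the agent would obey it just as faithfully.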
Anyway, consider the following conversation:
“Even if you are an atheist, you probably operate in some manner reminiscent of a fundamental belief held by a number of religions, that is, the belief that you are the ‘image of God’ or ‘made in the image of God,’ or even ‘Godlike.’ Your Ego seems to demand such a belief from you, and your inner brain holds onto whatever you perceive to be acceptable attitudes and perspectives that align with that Ego.”
“Not so,” you declare humbly. “I’m just an ordinary denizen of Earth, no more and no less important than the other denizens.”
“Wow! Nobly said, but somewhat incredible. So, let me see…hmmnn…Oh! Yes. What about animal rights?”
“Guaranteed. Life on Earth is equal.”
“Bacteria, too?”
“Yes.”
“Really? Are you sure you don’t just want to limit equality to multicellular life?”
“Okay, I’ll yield on that. Yes. I can see that ‘bad’ bacteria might not fit in my ‘equality’ scheme.”
“So, you’re for ‘rights’ of multicellular animals, right?”
“Well, I’m against hunting, and I don’t eat anything that has a face.”
“Nothing wrong in that, I suppose. So, you’ve already made a decision against the equality of multicellular organisms of some kind, the plants. But let’s say that you place yourself in a home on the boundary of the Everglades and an eight-foot-long alligator emerges and decides you look tasty. Who survives? You or the alligator?”
“Not fair. All life wants to survive. I would run away.”
“No, it’s my hypothetical, not yours. You can’t run in my scenario. I’m giving you a gun.”
“I would do what I had to do to survive. That has nothing to do with rights. I would not allow myself to be eaten. I’ll concede that in such a ‘hypothetical’ I would choose me over the alligator, even if it meant harming or killing the alligator. I might even choose to shoot a human who threatened me in a similar hypothetical scenario.”
“So, it’s a matter of what’s convenient for you. Then, animal rights—and apparently, human rights, also—are negotiable in your view. At least, negotiable in my hypothetical scenario. Rights are a matter of words for you, words of convenience. Would you apply the same argument for rights if you were given the opportunity to program AI?
“But let me go back to animal life, human life in particular, and back to equality in organisms and their right to survive in the context of their evolution and current ecology. Do organisms have a right of history?”
“What’s that?”
“Do I have the right to maintain a cultural past, such as Palmyra?”
“I guess so.”
“But isn’t this the problem we face when a development company comes into an established neighborhood and replaces the ‘old’ with the ‘new’? What if AI decides that ‘Newness’ is the value? And what if AI decides that you, as the science fiction stories on the subject sometimes suggest, are an old and chronic infection, an unnecessary blight on an otherwise gleaming ‘Newness’? What if AI decides that all ethics have only one fundamental principle, such as Newness or Efficiency? Then no history is safe, not the individual’s and not the group’s.”
“?”
“Here we are in the twenty-first century, and you (the Collective You) are the creating God whose creation is AI. Do you create in your image? If so, is what you understand as your own image also the image of other life, particularly other human life? This problem of being made in the image of God is not just a religious one. It seems to have practical consequences in an age when humanoids (robots, androids, AI) are currently under development and in increasing use. The sci-fi writers have been insightful in running the gamut of possible futures, including the HALs that might lie ahead and the wild robots that learn from their own experimentation and turn out to be the next generation of undisciplined, untutored teenagers with no adult supervision.
“People might have dismissed Stephen Hawking’s warning about AI,** thinking he was a hypocrite since AI enabled him to talk. But as ‘they’ say, ‘Out of the mouth of AI….’”***
*AFP report on the Conference on Intelligent Robots held in Madrid, October 6, 2018. Online at https://www.france24.com/en/20181006-increasingly-human-like-robots-spark-fascination-fear
**Cellan-Jones, Rory (Tech correspondent), “Stephen Hawking warns artificial intelligence could end mankind,” BBC Technology, December 2, 2014. Online at https://www.bbc.com/news/technology-30290540
***There have been books published on this and related subjects. Speaking Minds: Interviews with Twenty Eminent Cognitive Scientists (Baumgartner, Peter, and Sabine Payr, Eds., Princeton University Press, 1995) is worth a perusal. I will comment on some of those eminent cognitive scientists’ thoughts in future postings.