“James Elwood: master programmer. In charge of Mark 502-741, commonly known as Agnes, the world's most advanced electronic computer. Machines are made by men for man's benefit and progress, but when man ceases to control the products of his ingenuity and imagination he not only risks losing the benefit, but he takes a long and unpredictable step into... the Twilight Zone.”
Pretty insightful, huh? Is this where we almost are today with artificial intelligence? About to enter our own real-world Twilight Zone? Well, maybe not, and that’s probably good for us. Is there a limitation in AI that might be our single most important defense against it? Can ChatGPT love, or more to the point, become jealous?
In one exchange between Agnes and Elwood, the computer says “Agnes knows best.” But being human and interacting with other humans is never completely machinelike. We can demonstrate this with a single oft-spoken comment: “They’re so different, I can’t see how they ever got together, let alone married.” (Or, if you want, with its antithesis: “They seemed so perfect for each other; yet, they got divorced.”)
The algorithms of the human brain are complex enough for us to simultaneously walk over uneven ground while chewing gum, swatting at a bothersome fly, and thinking about a romantic relationship and all the potential and expected responses of the loved one. And that thinking involves our perceptions in the context of the loved one’s absence. We are also more than the moment. We couple past to future, using memories to frame hope, not simple predictions based on modeling. We respond to hormonal drives by framing them in settings that are fictional accounts of a next encounter we might or might not embody in an actual meeting in a particular setting. Much of our anticipation is fictional; yet, we understand the difference between hope and reality.
Many writers and researchers have thought through the ramifications of a thinking and emoting artificial intelligence. In Seth MacFarlane’s The Orville, the “robot” Isaac falls in love with Dr. Finn and, in one episode, maybe in imitation of Star Trek’s Data, acquires the ability to emote. Now think of all that goes into your own emotions: physiological processes meshed with cognition. From your interior gut, through your blood, to your belief system, your response is more complex than any machine, including current AI, can produce regardless of its rapidity. And that's simply because not even you know the entire why of your actions. You might even say that Charles Dickens nailed it—that is, the complexity—when he has Scrooge say the ghost might be nothing more than an “undigested bit of beef.” It’s also manifested in the term hangry. On a whim of biochemical interactions you can pulsate your emotions in different intensities.
Of course, ChatGPT and other artificial intelligence can impress us with rapidity and language. It’s more than the room full of monkeys with typewriters trying to reproduce the works of Shakespeare (a skit once performed by comedian Bob Newhart). ChatGPT can manipulate language highly effectively within the parameters of its algorithms, algorithms that were, by the way, devised by humans. Even were an AI system to devise its own algorithms, it would not reflect 3.8 billion years of biological evolution—something that you do every moment.
So, I’m skeptical that Agnes will ever know best, though it will acquire archetypal models and stereotypical behaviors. Given those last 3.8 billion years and our biological relationship to beer yeast's cytochrome c oxidase, I have my doubts about some inevitable AI takeover. We can, of course, relinquish much of our decision-making to Mark 502-741, an Agnes, even allowing it to choose our mates or hire cooperative and dedicated employees. But deep down, actually, really deep down, in our guts there might be some set of bacteria and viruses chaotically assembling an influence on our brains. Some hormone will activate. Some vitamin deficiency will make us respond in unanticipated fashion.
That human complexity makes us say at times, “I didn’t see it coming.” Or, “Those actions seem to be out of character.” AI might never reach the level of melodramatic method acting. It certainly is a long way off from being a Rodion Raskolnikov, impulse driven to enact an evil. And considering that all of us are a combination of chemical and biological processes acting within personal and cultural contexts and belief systems, the prospect of a fully human-like AI seems remote at best. Would Mark 502-741 (Agnes) come to the aid of subway passengers threatened by a rogue homeless person? Would Mark 502-741 as a DA charge the good Samaritan with a criminal offense? Or would Mark 502-741 hire or fire on the basis of weight, height, race, religion, or politics?
Agnes might know best, but it will be the best in the context of its limitations. *
*An afterthought for your consideration: https://dnyuz.com/2023/05/16/microsoft-says-new-a-i-shows-signs-of-human-reasoning/