B: “What?”
A: “Humans dependent upon nonhumans in a substitute world.”
B: “What, again? Are you talking about pets? If so, you have to realize that pets have been around as consolers and companions for millennia.”
A: “No. Machines. Machines as friends. Machines, or rather, AI.”
B: “Where’s this coming from?”
A: “I just read a report on the increased popularity of Xiaoice’s fake friend app.”
B: “Fake friend?”
A: “Yeah. I’m behind the times on this one. Didn’t see its rapid rise. Out of touch with the modern world. I think I’m getting…No, I’m not going to say it.”
B: “Old? Senile? Tell me.”
A: “Okay. Here’s what I know that I didn’t know until recently. There’s an app that substitutes for a friend or counselor. It’s a ‘good listener’ app that some people are using as a stand-in for human companions, an AI that ‘listens’ and responds sympathetically to the lonely, the downtrodden, the friendless. There are about a gajillion Chinese on the app and who knows—well, I guess, the Chinese do—how many others outside China? You can tell the AI your problems. It responds. And people, I guess, have accepted it as a companion, friend, or counselor.”
B: “And you find that disturbing, I guess.”
A: “You can bet your bot I do. Communicating with robots never concerned me until people began adopting them as serious companions. I think of all those old films and TV shows with robot companions that I took lightly. As a kid, I liked that robot in Forbidden Planet and the one in Lost in Space, but those were my days of innocence.”
B: “Why does a pretend friend bother you? There are lots of lonely people out there who might benefit from a companion, real or unreal. I can see advantages: People in nursing homes, for example. There’s a HAL out there waiting to befriend you, to play chess with you, to compliment you by complementing you. And look, there’s only a limited amount of practical advice anyone can give anyway. Nothing much new under the Sun, as they say. Advice? Heard that before; heard most, if not all of it, before. Anyway, what’s the harm in a friendly listener?”
A: “As imperfect as human advice is, even the guidance from one trained in counseling, it becomes truly dangerous when it’s a process placed in the control of a few with an agenda. Even without verbalizing advice, an AI can subtly agree with a course of action. I’m envisioning that lonely Chinese woman confiding in an Artificial Intelligence only to be ‘guided’ by an algorithm written by a malevolent human. Maybe worse, by a well-meaning, but foolish technocrat. Where’s the set of ethical checks and balances? Where’s the protection for the vulnerable, for the weak, for the lonely? For the children! We are developing a new set of victims, victims who willingly give away their innermost secrets to a machine whose control lies in the hands of government, Big Tech, or pompous fools who assume they know what’s best for the rest of us. And the influence is growing. Not just millions, but probably soon billions might submit to AI controls—many not even realizing they are under control. It’s like having advertising tricks on steroids, salesmen’s pitches on the downslope of a roller coaster, and the constant chatter of a 24-hour news cycle always in one’s ear—the ideal propaganda machines. I suppose what bothers me most is that people are actually ‘choosing’ to communicate with the app as though it is, in fact, real. Well, I should grant it is, I guess, for them. The AI can mimic a ‘real human.’ And some, I hear, get hooked on the friendship. Scary, I think. We might have millions of people walking around right now living a pretend life. Once removed from reality, can people ever understand the ‘real’ again? How will this affect the young and mentally immature? And what will it do to books, to reading and contemplating, and to introspection?”
B: “Okay. But before you criticize, let me say that I’ve heard you use Siri.”
A: “Sure, but you never hear me say, ‘Siri.’ I just start with a question like ‘Where’s the nearest restaurant?’ I don’t personalize Siri. It’s a disembodied voice and a trigger for a search.”
B: “All right. I am getting the picture. You’re worried that we—if I might suggest—just invented another ‘drug’ to distract us from a ‘real world.’ Allay your concerns, my friend. Allay your fears. Even if it is true, you won’t be able to put a lid on that bottle, so fretting about it is useless. Pandora has seen to that. No sense in trying to stop the unstoppable. For centuries no one has been able to stop the use of chemicals as escape mechanisms or comfort blankets. Just as drugs proliferated, so will AI controls over a population willing to yield its freedom to others. Drugs and AI. There’s no real control on either. To think otherwise is to ignore the size of the human population and the multiple intentions under which it operates. So, AI is proliferating like drugs, maybe faster. Every time we think we have some control, some new drug enters the system. I just read that the Vietnamese are about to okay the use of kratom. No doubt that’ll be used more widely. That’s another drug to distribute. And after kratom runs through the population, there will be another drug. Humans appear to hunger after that which ‘dehumanizes’ them in their search for ‘something better,’ an escape of some sort. Lured by the promise of a better personal world where, as the song goes, ‘troubles melt like lemon drops,’ people turn to anything; that used to be drugs, and now it is both drugs and AI. There are just too many people to caution, too many eager for ‘the easy way out,’ and too many far removed from daily realities. For centuries, people took refuge in religions and in political movements. I guess I could as easily argue that religion, politics, and psychoanalysis have long done what Xiaoice now does for its millions of users. And I agree with you that, like abuses by religious leaders and politicians, abuses arise from the use of AI.”
A: “I guess I’m just frustrated by what I see as humans losing more control over their lives.”