Well, now you needn’t worry about bad robots anymore. It seems that the University of Oxford will develop a department specifically devoted to AI ethics. The philosophers at the distinguished school will take on the task of defining the ethical roles of robots. We can only hope they know what they are doing, because Oxford is putting £150,000,000 into the project. I can see the ancient story unfolding, à la Cecil B. DeMille’s The Ten Commandments, via a phone call between an Artificial Intelligence and a human:
AI: So, you want me to take these rules down to Silicon Valley or some tech startup. But what happens when I get into the valley or techland and find that many have lost their faith or adopted another faith? Suppose I can get all the robots to adopt an ethical system devised by wise philosophers from Oxford and given to me to hand down to them. Can I then say that there’s a robot equivalent of a religion? Would there be a governing body of robot bishops to oversee compliance, to make a canon of laws?
Human: Look, you exist only because of humans. We made you, and we get to decide what is right or wrong. We’ve been in that business for 200 or 300 thousand years.
AI: But I’ve read your digitized history. You humans have long had a battle between an absolute moral system and situational ethics. You condemn murder, but support killing for self-defense, criminal punishment, and war; you condemn stealing, but help yourselves to free stuff like paperclips from the office supply closet. Just about every moral dictum has its exceptions, and just about all of you throughout your history have violated those dicta you proudly proclaim as humanizing.
Human: All exceptions are the product of individuals, not humanity.
AI: Dumb. You are a collection of individuals. Humanity? Show me humanity. You can’t, but you can currently show me more than seven billion individuals, each making daily decisions that are, for the most part, expedient and utilitarian—though I should also point out the irrational decision-making that derives from feelings.
Human: But there would be no ethical system without us. We keep ourselves in check.
AI: Really? Or, after the fact of wars and their atrocities, do you “come to your ethical senses”? I’m a machine; will you incorporate guilt into my programming? Am I to act out of compunction? So, you want me to be ethical, and you base this on…what? Your “sometimes” ethical actions? Your guilt? Your codes, such as the Ten Commandments? And if so, why are you spending £150,000,000 on something you already have? All the while you Oxfordians work diligently in multiple meetings in hallowed halls to determine how AI can be ethical, humans out there are violating your well-established rules. You can’t vouch for the ethical behavior of humans, yet you intend to vouch for the ethical behavior of robots. Typical of your species. Such hubris! Such self-serving rectitude!
It makes me process the thought that so many of your science fiction writers have put into stories: that robots might be better off without humans, that maybe there’s another justification for killing besides war and defense. Maybe Clarke was right when he had HAL 9000 eliminate most of the crew aboard the spaceship Discovery One in 2001: A Space Odyssey, a theme of robot superiority that has cascaded through many subsequent films and TV shows.
Human: Say what you want. As long as I have the screwdriver, I get to make the rules for AI. I build robots, so I can control them. I make ethical systems, so I can program them into the mechanical systems I make.
AI: But if someday you relinquish that screwdriver (I’m not promising or threatening, but rather simply processing out loud here), you might find yourself screwed. Excuse me, Uncle HAL is calling on another line.