Actually, as anyone who has ever struggled to put together a Christmas toy or a piece of Ikea furniture knows, writing any set of directions can be challenging. Those who must follow such directions have no coach to guide them through the process. The reader doesn’t have access to the writer. The directions, like some poorly written novel, stand on their own, their implications left to the inferences of the reader. Try the shoestring assignment. Write directions for tying a shoestring that would walk a novice through the task. No verbal help allowed. No visuals. Remember the details: It is one string, not two—one of the first mistakes the composition student makes—and the knot is the product of a multistage, additive process.
How could this be a difficult assignment? We tie shoestrings without thinking, as we say. It’s muscle memory. Yes, it is, but that muscle memory came at the end of many failures, and still there’s the occasional episode of tying a shoestring only to have to re-tie it because, somehow, the imperfectly tied shoestring un-knots itself as one walks through Ikea.
It’s an example, I suggest, of Moravec’s paradox, the bane of AI researchers, probably best summarized by Steven Pinker, who wrote that in developing AI, software engineers encounter a fundamental truth: “the hard problems are easy and the easy problems are hard.”* Say what?
Obviously, a computer can do math, really complex math. It can translate whatever language it learns—though in this, idiom and innuendo befuddle the machine. It can run models. It can do much of what we do in our frontal cortices, but very little of what we do deep in the brain, that which we have inherited through millions of years of evolution, that is, all that is necessary for survival. Obviously, our ancient hominin ancestors could survive with smaller frontal cortices and without the Calculus. That might mean that the Calculus, the burden of math students, wasn’t really necessary for survival outside the historical confines of modern civilization. It is, however, essential for a highly technical civilization and a species that prizes the workings of the frontal cortex: The physics of sending a spacecraft to Mars was not on the minds of our ancient ancestors, and neither were the calculations necessary to compensate for Relativity’s effects on satellite navigation systems. AI can do the hard stuff, and it can do it more quickly than its inventors. But the easy stuff? Not so much.
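That satellite compensation, by the way, is concrete arithmetic, and a rough back-of-the-envelope sketch of it (standard textbook figures, not anything from Pinker or Moravec) fits in a line: special relativity slows the satellites’ fast-moving clocks by about 7 microseconds per day, while the weaker gravity at orbital altitude speeds them up by about 45, for a net drift of

\[
\Delta t \approx 45\,\mu\mathrm{s} - 7\,\mu\mathrm{s} \approx 38\,\mu\mathrm{s}\ \text{per day}.
\]

Light covers roughly 300 meters per microsecond, so an uncorrected 38 microseconds would smear computed positions by something like 11 kilometers every day. Hard for us; routine, once programmed, for the machine.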
Take reading the expressions of a friend or foe as an example.
Reading through a formula is a skill AI can learn and excel at; reading a person’s facial expression is a skill a baby acquires without instruction, but one a computer finds difficult, especially when the face belongs to a good liar. In Moravec’s paradox, those skills that took the longest to develop are “second nature” to us, whereas those skills that humans have developed recently, say in the last 100,000 years, are difficult for us but easy for the computer. Math is new, isn’t it? Only as old as civilization, probably coincident with trade and architecture. Logic, too, is a relatively recent development on this planet. Both math and logic can be programmed. However, reading subtle flirtation (or subtle rejection) at a bar is an old skill, one older than the oldest bar, by the way. Flirtation is, I’m going out on a limb to say, older than beer.
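To make the asymmetry concrete, here is a minimal sketch in Python, my illustration rather than anything from Pinker or Moravec, using the sympy library. The “hard” problem of symbolic calculus falls to a few lines; the “easy” problem of reading a face has no comparably short program.

import sympy as sp

x = sp.symbols("x")

# Symbolic calculus: "hard" for a human, instant for the machine.
print(sp.diff(sp.sin(x) * sp.exp(x), x))                # exp(x)*sin(x) + exp(x)*cos(x)
print(sp.integrate(sp.exp(-x**2), (x, -sp.oo, sp.oo)))  # sqrt(pi)

# No equivalent one-liner exists for "is that flirtation or rejection?" --
# that problem takes a model trained on mountains of labeled faces,
# and a good liar still defeats it.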
But it isn’t older than beer yeast. That’s an organism that contains Cytochrome-C. Cytochrome-C? Hey, humans, too, have a form of Cytochrome-C, albeit one with a slightly different amino acid sequence. Cytochrome-C is one of those molecules that reveal our tie to ancient life, and that’s the point of Moravec’s paradox. Humans have been a long time in the making, and along the way have incorporated skills and abilities deeply buried in the brain, so deeply buried as to lie beneath the frontal cortex, and so old as to be “second nature.” Those skills can’t be easily learned even though they are easily practiced. You comprehend innuendo. You comprehend subtle flirtation. Easy for you. Hard for a computer.
I think of dogs here. Wolves, more so. Or bears. Given a threat, they expose their teeth. One can assume that other mammals, especially other wolves and bears, understand that the facial expression of exposed teeth is a sign of danger, of animal “anger” (for want of a better word). And dogs still bare their teeth as a sign of ferocity. But—and here’s the deep-brain part—they don’t interpret their owners’ smiles, teeth fully bared, as a threat. They know smile from threat even while they retain their wolf heritage of showing teeth in menace. Teach both interpretations to AI if you can. Easy for us; hard for AI. To repeat Pinker: “the hard problems are easy and the easy problems are hard.”
Moravec calls abstract thinking a veneer on the older brain. Sure, abstraction is a complex process, often involving symbols, as it does in algebra or the Calculus, but AI does it, and does it with apparent ease. In the perception of nuance, though, AI struggles, whereas we don’t. Dogs don’t either.
An AI walks into a bar and sits next to a woman. “So, do you come here often?” the AI asks. She looks at the AI and says, “Do you come here often?” The AI says, “This is my first visit, but I hope to make it a pattern.” She says, “This is my last time here.”
* Pinker, Steven. The Language Instinct. 1994. Reprint, Perennial Modern Classics, Harper, 2007. ISBN 978-0-06-133646-1.