In a recent article, I began to unpack Rodney Brooks’ October 2017 essay “The Seven Deadly Sins of AI Predictions.” Now I continue my analysis by looking into the faulty atheistic thinking that motivates the AI salvation preached by futurists such as Google’s Ray Kurzweil. Although Brooks does not address this worldview dimension, his critique of AI predictive sins provides a great opportunity for just that.
Brooks is a pioneer of robotic artificial intelligence (AI) and is MIT Panasonic Professor of Robotics Emeritus. He is also the founder and chief technology officer of Rethink Robotics, which makes cobots—robots designed to collaborate with humans in a shared industrial workspace.
Previously I discussed Brooks’ remark that “all the evidence that I see says we have no real idea yet how to build” the superintelligent devices that Kurzweil and like-minded singularity advocates imagine.
Singularity believers claim that we will soon reach a tipping point at which AI devices begin an exponential increase in capability, surpassing human intelligence. AI salvation would then occur by uploading human minds into the newly awakened world of computer consciousness, or so the story goes.
What’s wrong with the reasoning behind this AI-tech salvation?
Brooks identifies the problem of “suitcase words.” Marvin Minsky, one of the founding fathers of artificial intelligence, used the term “suitcase words” to identify words that have an unusually large variety of meanings. “Learning” is one such word, as in the AI term “machine learning.”
“When people hear that machine learning is making great strides in some new domain, they tend to use as a mental model the way in which a person would learn that new domain. However, machine learning is very brittle, and it requires lots of preparation by human researchers or engineers, special-purpose coding, special-purpose sets of training data, and a custom learning structure for each new problem domain. Today’s machine learning is not at all the sponge-like learning that humans engage in, making rapid progress in a new domain without having to be surgically altered or purpose-built.”
Irresponsible use of suitcase words commits the logical fallacy of equivocation: many attributes of human learning are assumed, without justification, to be true of machine learning. Machines do not “learn” in the profound sense that we do. Naturalistic assumptions make this equivocation less noticeable. How so?
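Brooks’ point about brittleness can be caricatured in a few lines of code. The sketch below (all data and names are hypothetical) shows machine “learning” as narrow statistical fitting: a word-count classifier handles its training domain, but moved to any new domain it scores every input at zero and must be re-engineered by a human with new data, features, and structure.

```python
from collections import Counter

# A minimal caricature of machine "learning": count which words appear
# in each class of the (tiny, hypothetical) training data, then label
# new text by which class shares more words with it.
def train(examples):
    """examples: list of (text, label) pairs -> per-label word counts."""
    counts = {}
    for text, label in examples:
        counts.setdefault(label, Counter()).update(text.lower().split())
    return counts

def classify(model, text):
    words = text.lower().split()
    scores = {label: sum(c[w] for w in words) for label, c in model.items()}
    return max(scores, key=scores.get)

# "Learning" the narrow spam domain works...
spam_model = train([
    ("win free money now", "spam"),
    ("meeting agenda attached", "ham"),
])
print(classify(spam_model, "free money offer"))  # -> spam

# ...but the same model is useless for a new domain (say, movie
# sentiment): every word is unseen, all scores are zero, and the
# returned label is arbitrary. Nothing sponge-like happens here.
print(classify(spam_model, "this film was wonderful"))
```

The classifier never acquires concepts; it only tallies tokens, which is why each new problem domain requires fresh human preparation.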
According to atheistic naturalism, humans are merely complicated arrangements of material parts, cobbled together by unguided evolution, out of which consciousness and learning have arisen. If one accepts this naturalistic story, then conscious learning has already arisen at least once in the history of the cosmos (in advanced animals such as humans), so why not a second time (the technological singularity)?
AI scientist and entrepreneur Erik Larson has probed deeper into the disconnect between machine computational “learning” and true human learning. Much of human learning goes far beyond generalizations made by gathering and computationally processing data. Larson writes:
“In fact, there is no computational framework for epistemologically complex learning at all—if we could do that, we could solve the frame problem. AI … has made progress only towards problems that admit of computational representation and processing, ones that importantly presuppose a particular simplistic set of epistemological conditions.”
True human learning is “epistemologically complex” (cognitively rich, and much more), as Larson notes. When machines “learn” they are only mimicking (in some weak sense) a tiny component of human learning, and there is no evidence that computers are even doing that consciously, or ever will. This connects us with another important point in Brooks’ essay—the difference between performance and competence.
Computers perform narrow tasks more rapidly than humans, but they lack competence. Brooks explains:
“Now suppose a person tells us that a particular photo shows people playing Frisbee in the park. We naturally assume that this person can answer questions like What is the shape of a Frisbee? Roughly how far can a person throw a Frisbee? Can a person eat a Frisbee? Roughly how many people play Frisbee at once? Can a three-month-old person play Frisbee? Is today’s weather suitable for playing Frisbee?
“Computers that can label images like “people playing Frisbee in a park” have no chance of answering those questions. Besides the fact that they can only label more images and cannot answer questions at all, they have no idea what a person is, that parks are usually outside, that people have ages, that weather is anything more than how it makes a photo look, etc.”
Image recognition software lets me find Frisbee-playing pictures that help convey particular ideas in my slideshow. Google has built an AI search engine that performs this task well. But AI devices are incapable of conceiving ideas to communicate in a slideshow. They perform narrow tasks quickly, yet lack competence, a distinction that is often missed, especially by people whose naturalistic assumptions blur the boundary between machine performance and conscious, intelligent competence.
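The performance/competence gap Brooks describes can be made vivid with a deliberately simple sketch (everything here is hypothetical, not a real labeling system): a program that “performs” image labeling flawlessly on its inputs has literally no machinery for answering even trivial questions about what its labels mean.

```python
# A toy "image labeler" over hypothetical data: it performs the narrow
# task of mapping an image ID to a caption string, and nothing more.
LABELS = {
    "img_001.jpg": "people playing Frisbee in a park",
    "img_002.jpg": "a dog on a beach",
}

def label(image_id):
    """High performance on the narrow task: look up a caption."""
    return LABELS.get(image_id, "unknown")

def answer(question):
    """Zero competence: the labeler has no world model -- no concept of
    people, parks, ages, or weather -- so there is nothing to answer with."""
    return None

print(label("img_001.jpg"))                    # -> people playing Frisbee in a park
print(answer("Can a person eat a Frisbee?"))   # -> None
```

The point of the caricature is that fast, accurate labeling gives no grounds for inferring any of the background understanding a human labeler would have.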
So what’s wrong with the godless AI salvation story, which says humans will merge with machine learning and live virtually forever? Machines do not really learn in a manner that involves true competence. This naturalistic myth rests on fallacious suitcase-word equivocation and on the (relative) incompetence of some humans who should know better.
Michael N. Keas is Adjunct Professor of the History and Philosophy of Science at Biola University and a Fellow of the Discovery Institute’s Center for Science & Culture.