Some futurists, such as Ray Kurzweil, have argued that within a few decades we will reach a tipping point at which artificial intelligence (AI) in robots and other computing devices will outstrip human capability and trigger runaway technological growth. They call this tipping point the technological singularity. This alleged advent of superhuman intelligence will be so incomprehensible to our minds that we cannot fathom how life will be changed, though they assume mostly for the better.
The inventor-investor billionaire Elon Musk worries about our future if machines acquire the power to rebel against their creators. This is a high-tech twist on the old sci-fi horror of Frankenstein. British physicist Stephen Hawking and Astronomer Royal Martin Rees have raised similar concerns. But these celebrity prophets of robotic cataclysm are not AI experts.
Responding to such groundless doomsday opinions, leading AI scientist and entrepreneur Rodney Brooks said in a 2017 TechCrunch interview: “For those who do work in AI, we know how hard it is to get anything to actually work through product level.” Brooks is skeptical of both the pessimistic technological prophets and overly optimistic singularity futurists such as Ray Kurzweil, a director of engineering at Google.
In assessing the future of nasty or nice robotic creations, we ought to pay attention to Rodney Brooks. He is the founding director of MIT’s Computer Science and Artificial Intelligence Lab and the cofounder of several leading robot manufacturers. He has published many papers in computer vision, artificial intelligence, robotics, and artificial life. Furthermore, he is a Founding Fellow of the Association for the Advancement of Artificial Intelligence. For these reasons, and more, he is credited as one of the founding fathers of modern robotics.
Brooks published “The Seven Deadly Sins of AI Predictions” in MIT Technology Review (2017), where he debunks “mistaken extrapolations, limited imagination, and other common mistakes that distract us from thinking more productively about the future.”
Let’s focus today on just one of these predictive sins: the grievous mistake of unreasonable, imaginative extrapolation to future technological magic.
Based on research for a chapter in my forthcoming book, I agree with Brooks about the unreasonableness of “imagining magic” powers in future technology by hasty extrapolation from recent AI trends. Brooks refers to the inventor and science fiction writer Arthur C. Clarke’s 1973 technological “third law” that “any sufficiently advanced technology is indistinguishable from magic.” Brooks comments:
“This is a problem we all have with imagined future technology. If it is far enough away from the technology we have and understand today, then we do not know its limitations. And if it becomes indistinguishable from magic, anything one says about it is no longer falsifiable.”
Scientists generally aim for theories that are testable, so they can tell whether a given theory is likely true or likely false. The term “falsifiable” that Brooks uses refers roughly to this requirement: a falsifiable claim is one that could, at least in principle, be shown wrong by evidence. That requirement is good for scientific and technological accountability.
“This is a problem I regularly encounter when trying to debate with people about whether we should fear artificial general intelligence, or AGI—the idea that we will build autonomous agents that operate much like beings in the world. I am told that I do not understand how powerful AGI will be. That is not an argument. We have no idea whether it can even exist. I would like it to exist—this has always been my own motivation for working in robotics and AI. But modern-day AGI research is not doing well at all on either being general or supporting an independent entity with an ongoing existence. It mostly seems stuck on the same issues in reasoning and common sense that AI has had problems with for at least 50 years. All the evidence that I see says we have no real idea yet how to build one. Its properties are completely unknown, so rhetorically it quickly becomes magical, powerful without limit.”
But “nothing in the universe is without limit,” he says in rebuke to the unreasonable technological faith of many optimistic and pessimistic futurists. Arguments that assume magical wonders in future technology “can never be refuted. It is a faith-based argument, not a scientific argument,” he insists.
Remember, Brooks is one of the founding fathers of modern robotics. He has dedicated his life to constructing machines that mimic human capabilities as closely as possible. Now, perched at the peak of an illustrious career, Brooks seriously doubts whether superintelligent machines are even possible. That doubt leaves a gaping hole in Kurzweil’s vision of technological salvation.
The magical Harry Potter stories “are not unreasonable visions of our world as it will exist only a few decades from now” when the singularity arrives, Kurzweil predicted in his book “The Singularity Is Near: When Humans Transcend Biology” (2005), p. 5. Many singularitarians like Kurzweil even claim that humans will achieve immortality and superintelligence when we merge with this future machine intelligence. Yet Brooks reports that “all the evidence that I see says we have no real idea yet how to build” the superintelligent mechanical devices that Kurzweil and his fellow singularitarians imagine.
The properties of our imagined future technological savior “are completely unknown, so rhetorically it quickly becomes magical,” Brooks argues. Rhetoric is the art of persuasion, and sometimes it is deployed in an unreasonable manner. The singularity faith in technological salvation is one such example.
Michael N. Keas is Adjunct Professor of the History and Philosophy of Science at Biola University and a Fellow of the Discovery Institute’s Center for Science & Culture.