Some futurists and science fiction writers predict that we’re on the cusp of a world-changing “Technological Singularity.” Skeptics say there will be no such thing. Today, I’ll be debating author Ramez Naam about which side is right.
A recent article by Erik Sofge in Popular Science really got my hackles up. Sofge argued that the Singularity is nothing more than a science fiction-infused, faith-based initiative — a claim I take great exception to, given (1) the predictive power and importance of speculative fiction, and (2) the very real possibility of our technologies escaping the confines of our comprehension and control. In his article, Sofge described futurist Ramez Naam as a “Singularity skeptic,” which prompted me to contact him and propose a debate. Here’s how our conversation unfolded.
George: You were recently described by Sofge as a “Singularity skeptic,” which for me came as a bit of a surprise given your amazing track record as a futurist and scifi novelist. You’ve speculated about such things as interconnected hive-minds and the uploading of human consciousness to a computer — but you draw the line, it would seem, at super artificial intelligence (SAI). Now, I’m absolutely convinced that we’ll eventually develop a machine superintelligence with capacities that exceed our own by an order of magnitude — leading to the Technological Singularity, or so-called Intelligence Explosion (my preferred term). But if I understand your objections correctly, you’re suggesting that the pending preponderance of highly specialized AI will never amount to a superintelligence — and that our AI track record to date proves this. I think it’s important that you clarify and elaborate upon this, not least because you’re denying something that many well-respected thinkers and AI theorists describe as an existential risk. I’m also hoping you can provide your own definition of the Singularity, just to ensure that we’re talking about the same thing.
Mez: Hey, George. Great to be in dialogue. To be clear, I 100% believe that it’s possible, in principle, to create smarter-than-human machine intelligence, either by designing AIs or by uploading human minds to computers. And you’re right, I do talk about some of this in Nexus and my other novels. That said, I think it’s tremendously harder and further away than the most enthusiastic proponents currently believe. I talked about this at Charlie Stross’s blog, in a piece called “The Singularity is Further Than it Appears.” Most work in AI has nothing to do with building minds capable of general reasoning. And uploading human brains still has huge unknowns.
My other issue is with the word “Singularity.” You asked me to define it. Well…