Glenn Reynolds, in a quick review of why we appear to be closing fast on the Singularity, concludes with this intriguing thought:
The future is almost here, but we’ve still got some choices about how things will turn out. Let’s try to choose wisely.
In context, Glenn is referring to the when and where of the Singularity. Will it occur (or rather, will it start) in the US or in China? Or someplace else? Interesting questions.
Does it matter where the Singularity begins?
Possibly, if only because the place of origin might have something to do with what kind of Singularity occurs. Although we tend to talk about the Singularity as a kind of monolithic proposition, there are a number of scenarios that embrace an eventual technological singularity, and not all of them are particularly nice. In fact, some are pretty horrible.
One of the scariest is the grey goo scenario. It isn’t generally classified as a “technological singularity” per se, but of course it is that. In the grey goo scenario, self-replicating nanomachines get carried away with their ability to reproduce and end up deconstructing the entire world (or universe, depending on the flavor of the scenario) into a pile of, well, self-replicating nanomachines. From our macroscale vantage point, the world would then look like nothing but a mass of grey goo. But of course there would be no one around to see it from our vantage point, because everyone and everything would have been taken apart to make more randy little robots.
The good news is that grey goo looks increasingly unlikely as models for implementing nanotechnology mature. But it’s far from the only scenario. Glenn quotes from the Vernor Vinge essay that introduced the idea of the Singularity:
If networking is widespread enough (into ubiquitous embedded systems), it may seem as if our artifacts as a whole had suddenly wakened.
Which raises a very important question: What kind of mood will the machines be in when they wake up? How will they view us? How will they be disposed towards us?
Hollywood has had no shortage of answers to those questions. In The Terminator and its sequels, the newly awakened machines view humans as rivals (or pests) that need to be eliminated. In the Matrix films, the machines treat us more like cattle — useful, but ultimately disposable.
In the real world, the “mood” of the awakened machines will come down to two things:
- The human values and ethical systems with which they are innately imbued (if any).
- The values and ethical systems that they develop independently, using their own reason and their own unique perspective.
Ultimately, we have to hope that the second item looks like a new and improved version of the best of the best from the first item. We may be able to help the machines in that direction by programming them to be nice in the first place. But it’s far from being a given that they will be nice. What if we develop computers with human-level (or greater) intelligence before we figure out how to make computers nice? Eliezer Yudkowsky sets out this dilemma in rather bleak terms, somewhat reminiscent of Bill Joy at his most buzzkillish:
Moore’s Law does make it easier to develop AI without understanding what you’re doing, but that’s not a good thing. Moore’s Law gradually lowers the difficulty of building AI, but it doesn’t make Friendly AI any easier. Friendly AI has nothing to do with hardware; it is a question of understanding. Once you have just enough computing power that someone can build AI if they know exactly what they’re doing, Moore’s Law is no longer your friend. Moore’s Law is slowly weakening the shield that prevents us from messing around with AI before we really understand intelligence. Eventually that barrier will go down, and if we haven’t mastered the art of Friendly AI by that time, we’re in very serious trouble. Moore’s Law is the countdown and it is ticking away. Moore’s Law is the enemy.
Um, yikes.
All of which leads us back to those remaining choices that Glenn referred to. The fact that there aren’t many of them to be made does not diminish their significance. On the contrary. This is something that we have to get smart about now, and that we have to get right the first time. The Singularity isn’t likely to lend itself to a do-over.
But if we do manage to get it right, to make machines that are nice (or if we fail to do it but the machines figure out how to be nice on their own, or if it turns out that “niceness” is ultimately a function of intelligence and is bound to win out in a superintelligent system), then the choices that we make now will pay off in ways that we can’t really even imagine. Which is why they call it a Singularity, after all.
Suffice it to say that the true age of choice — the age in which each one of us is able to choose, at the most fundamental level, who we are, what our lives are, what the world is — that age begins after the Singularity; that is, the right kind of Singularity. The stakes couldn’t possibly be higher. So let us “choose wisely,” indeed.