The Age of Choice

By Phil | September 14, 2005

Glenn Reynolds, in a quick review of why we appear to be closing fast on the Singularity, concludes with this intriguing thought:

The future is almost here, but we’ve still got some choices about how things will turn out. Let’s try to choose wisely.

In context, Glenn is referring to the when and where of the Singularity. Will it occur (or rather, will it start) in the US or in China? Or someplace else? Interesting questions.

Does it matter where the Singularity begins?

Possibly, if only because the place of origin might have something to do with what kind of Singularity occurs. Although we tend to talk about the Singularity as a kind of monolithic proposition, there are a number of scenarios that embrace an eventual technological singularity, and not all of them are particularly nice. In fact, some are pretty horrible.

One of the scariest is the grey goo scenario. It isn’t generally classified as a “technological singularity” per se, but of course it is one. In the grey goo scenario, self-replicating nanomachines get carried away with their ability to reproduce and end up deconstructing the entire world (or universe, depending on the flavor of the scenario) into a pile of — well, self-replicating nanomachines. From our macroscale vantage point, the world would then look like nothing but a mass of grey goo. But of course there would be no one around to see it from our vantage point, because everyone and everything would have been taken apart to make more randy little robots.
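
How fast could that happen? Here is a back-of-envelope sketch in Python; every number in it is an assumption invented for illustration (a one-femtogram replicator, a 100-minute doubling time), not anyone’s real estimate:

    import math

    # Grey goo arithmetic. Both inputs are made up for illustration.
    EARTH_MASS_KG = 5.97e24      # approximate mass of the Earth
    REPLICATOR_MASS_KG = 1e-15   # assume a one-femtogram nanomachine
    DOUBLING_TIME_MIN = 100.0    # assume each machine copies itself in 100 minutes

    # Doublings needed before the swarm outweighs the planet
    doublings = math.log2(EARTH_MASS_KG / REPLICATOR_MASS_KG)
    days = doublings * DOUBLING_TIME_MIN / (60 * 24)
    print(f"{doublings:.0f} doublings, about {days:.0f} days")

Under those made-up numbers: roughly 130 doublings, and about nine days from the first replicator to a planet’s worth of goo. The specific figures are beside the point; what matters is that exponential replication leaves almost no time to react.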

The good news is that grey goo looks increasingly less likely as models for implementing nanotechnology mature. But it’s far from the only scenario. Glenn quotes from the Vernor Vinge essay that introduced the idea of the Singularity:

If networking is widespread enough (into ubiquitous embedded systems), it may seem as if our artifacts as a whole had suddenly wakened.

Which raises a very important question: What kind of mood will the machines be in when they wake up? How will they view us? How will they be disposed towards us?

Hollywood has had no shortage of answers to those questions. In The Terminator and its sequels, the newly awakened machines view humans as rivals (or pests) that need to be eliminated. In the Matrix films, the machines treat us more like cattle — useful, but ultimately disposable.

In the real world, the “mood” of the awakened machines will come down to two things:

  1. The human values and ethical systems with which they are innately imbued (if any).

  2. The values and ethical systems that they develop independently, using their own reason and their own unique perspective.

Ultimately, we have to hope that the second item looks like a new and improved version of the best of the best from the first item. We may be able to help the machines in that direction by programming them to be nice in the first place. But it’s far from being a given that they will be nice. What if we develop computers with human-level (or greater) intelligence before we figure out how to make computers nice? Eliezer Yudkowsky sets out this dilemma in rather bleak terms, somewhat reminiscent of Bill Joy at his most buzzkillish:

Moore’s Law does make it easier to develop AI without understanding what you’re doing, but that’s not a good thing. Moore’s Law gradually lowers the difficulty of building AI, but it doesn’t make Friendly AI any easier. Friendly AI has nothing to do with hardware; it is a question of understanding. Once you have just enough computing power that someone can build AI if they know exactly what they’re doing, Moore’s Law is no longer your friend. Moore’s Law is slowly weakening the shield that prevents us from messing around with AI before we really understand intelligence. Eventually that barrier will go down, and if we haven’t mastered the art of Friendly AI by that time, we’re in very serious trouble. Moore’s Law is the countdown and it is ticking away. Moore’s Law is the enemy.

Um, yikes.
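
To put a rough number on that countdown, here is a sketch in the same back-of-envelope spirit; the starting cost, the tinkerer’s budget, and the 18-month halving are all assumptions for illustration:

    # Moore's Law as a countdown. Every number is an assumption
    # for illustration, not a real estimate.
    cost_usd = 1e9       # assume brute-force AI hardware costs $1 billion today
    halving_years = 1.5  # assume price/performance halves every 18 months
    budget_usd = 1e4     # assume a reckless tinkerer can spend $10,000

    years = 0.0
    while cost_usd > budget_usd:
        cost_usd /= 2    # the cost barrier erodes on a fixed hardware schedule
        years += halving_years
    print(f"the shield is gone in roughly {years:.0f} years")

However you tune the inputs, the barrier falls on a schedule that has nothing to do with whether anyone has figured out how to make AI Friendly.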

All of which leads us back to those choices that Glenn referred to. The fact that there aren’t many of them to be made does not diminish their significance. On the contrary. This is something that we have to get smart about now, and that we have to get right the first time. The Singularity isn’t likely to lend itself to a do-over.

But if we do manage to get it right, to make machines that are nice — or if we fail to do it, but the machines figure out how to be nice on their own, or if it turns out that “niceness” is ultimately a function of intelligence and is bound to win out in a superintelligent system — then the choices that we make now will pay off in ways that we can’t really even imagine. Which is why they call it a Singularity, after all.

Suffice it to say that the true age of choice — the age in which each one of us is able to choose, at the most fundamental level, who we are, what our lives are, what the world is — that age begins after the Singularity; that is, the right kind of Singularity. The stakes couldn’t possibly be higher. So let us “choose wisely,” indeed.

  • https://www.blog.speculist.com Stephen Gordon

    Phil:

    “the true age of choice…begins after the Singularity; that is, the right kind of Singularity.”

    It’s an important distinction. It is amazing that China is such an economic powerhouse in spite of its constant efforts to squelch free speech.

    Imagine if a bot programmed to find politically “dangerous” speech on the Internet were the first to “wake up.” I shudder at the thought.

  • Wildezword

    My 2-cent comments. :)

    Wouldn’t it be safe to argue that all intelligent beings inherently possess self-interest? Otherwise, what is intelligence for? Making observations of our world today, we can see that “evil” is really just self-interest gone completely bonkers and taken to extremes. In like manner, an intelligent AI would interpret all attempts to make it “friendly” as hostile…after all, why would we create an AI in the first place unless it was to provide ourselves with an eternal servant/slave? Assuming that we indeed create true sentience in AI, we would morally be forced to do what we do when we have children…let them go and find their own purpose in life. Chances are, if a whole race of AIs were created that could communicate with each other, their emerging “society” would probably be subject to the same diversity we possess as humans. The “liberal” AIs arguing with the “conservatives.” :)

    I would also argue (for fun, of course) that any attempt to restrain certain behavior in AI (violence, for example) would reduce such an intelligence to that of a pet dog. For example, what would happen to a human intelligence if somehow you were able to remove the ability to think in terms of hurting other humans? By removing whole sections of life experience from the “equation” you automatically prevent any kind of sophisticated intelligence from growing.

    But, what do I know. :)

  • http://www.thesnob.com the snob

    I really like the concept of the Singularity, but stuff like this makes me wonder if the “serious” researchers in the field aren’t engaging in a little too much Nick Negroponte-esque hypemongering.

    I just don’t buy that sentient AI is going to occur anytime soon. Sure, computers can add numbers insanely fast, but in an architectural sense a grasshopper can still do far more complex tasks (like pattern recognition) with a comparatively small amount of horsepower. As for the human brain, we are still discovering new pathways and processes every year that cause us to rethink how the sucker actually works. By comparison, computers look like giant steam engines next to hydrogen fuel cells. It’s not simply computing power; it’s architecture.

    Anyway, to me the Singularity is more about a sociological development. For instance, when I consulted for a division of HP, they said that a decade ago they would bring out a new product generation every 3-4 years, whereas today it’s more like 9 months, and dropping. New ideas propagate with astonishing speed and are debated, revised, and sidelined by newer ones before the original even makes its way around the block.

    Somewhere in all this is a remarkable change that will, when all is said and done, rival things like the transatlantic cable and all that, but to me this noodling about sentient machines seems more than a little onanistic, if you catch my drift…

  • http://www.wethefree.com Bob

    Well, I appreciate the tempering of the singularity optimism. I sincerely hope we make it far enough to have to worry about grey goo.

    Unfortunately, as a computer scientist I’m quite certain Moore’s law — much less software technology — isn’t going to get us to machine sentience anywhere near fast enough to avoid the wee little problem caused by violating an even more well-known law of science fiction.

    We have violated the “Prime Directive”.

    It all started when we gave tons of (oil) money — by sheer bad luck of geology — to the most backward and repressive elements in Islam. That would be the Saudi Wahhabists. Now we head for the “Tinfoil Apocalypse”. More and more technology leaking into the hands of ignorant and vicious tribal nihilists. And relying on the CIA/UN/IAEA as our “saviours”! If it wasn’t for the US military we’d be toast already.

    Did I forget to mention that the “moderate” (loser!) of the recent Iranian “election” has publicly called for Israel’s nuclear destruction?

    As everyone already knows in those few clear instants when we periodically snap out of denial, we’ve got somewhere between 0 and 10 years at best without the complete quarantine of Islam, including cutting them off from their Chinese and NorKorCom benefactors. And the chance of that happening with the Eurabians and Dem Fascifists in positions of influence is nil.

    You know it’s bad when the good news is that the most likely nuke nightmare doesn’t measure up to Katrina in some ways.

  • youngsanger

    My thought experiment in this regard was a specialized machine sitting in some public institution, like a university. This machine scanned the entire web, lexically, and continually correlated lexical connections with trends that later appeared in real life. For example, this machine, simply through increasingly complex language analysis, could predict where the next economic boom would occur. It would publish its results on the web. Sort of a self-directed Google of extraordinary speed.

    Of course, the machine would eventually discover its own web site and learn that this web site was a fair predictor of future events, but the machine could never get the current copy of the web site, for it hadn’t published it yet. So the machine performed lexical analysis on its own web site’s past predictions and other current-event material. Soon it learned its own laws, the ones it had already discovered, and re-incorporated them, uselessly.

    This result is the divide-by-zero of AI, the endless loop. Or the uncertainty principle of AI: you can know a thing only so well before you disturb it. Or: a machine that affects its environment affects itself.

    For many generations to come, digital machines as we know them, no matter how fast, will always have this uncertainty level, where their own presence disturbs their logic. The evolved species have hundreds of millions of years of connections to the environment, from gases breathed to DNA shared. Evolved biology has lived with, interacted with, and become a part of this world. AI machines, with barely any history in the world, will be a long time in understanding themselves, for they have not seen themselves react to this world for all that long.
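
    A minimal toy of that loop (the running-mean “model” and everything else below is invented for illustration): the machine’s published predictions become part of its own input, and here they immediately settle into a fixed point that adds no new information, re-incorporated uselessly.

        # Toy version of the self-referential predictor described above.
        def predict(observations):
            # Stand-in model: predict the mean of everything observed so far.
            return sum(observations) / len(observations)

        world = [1.0, 2.0, 3.0]   # external events the machine observes
        published = []            # the machine's own public track record

        for step in range(5):
            # It can only read its *past* publications, never the current
            # one -- the loop described in the comment above.
            prediction = predict(world + published)
            published.append(prediction)  # publishing alters the next input
            print(f"step {step}: predicted {prediction}")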

  • http://www.feoamante.com visvivalaw

    It’s important to remember that artificial intelligence is not artificial human intelligence. We often imagine an AI not liking its role as our servant and rising up against humanity, but that image is a product of projection. We’re imagining how we’d feel and assuming the AI would feel the same way.

    All of our emotions — all the things that motivate us — are products of evolution, and the AI is not. Things like self-preservation don’t magically appear with self-awareness. My point is that AIs won’t have any desire to do anything until we program them to.

  • Acheron

    The fact that one’s grandparents would be appalled and baffled, not only by today’s willful ignorance but by “higher” culture’s supine self-destruction, indicates that the “singularity” looming by (say) 2030, when the capabilities of every desktop computer will exceed a human brain’s, will not be perceived as such. That generation will have been born in 1972, graduated in 1994, and will never have known anything but accelerating technical innovation.

    Humanity’s problem with “the machines” will not involve robotics, or any single component (which can be unplugged). The danger lies in vast networks sublimating to a “super-organism”, indefinable yet real, which will almost certainly evolve self-awareness in due time.

    Extensive literature on “emergent” or “spontaneous” order indicates that bootstrapping self-organization resembles Prigogine’s “information eddies”, which invert the entropic processes of inert thermodynamic systems. In other words, a planetary super-organism will have no “intellectual” limits. And unless civilization reverts to 1750, that “intelligence” (call it what you will) MUST come.

    No one will understand it or possess an inkling of its “purposes”… but all sentient individuals will be in its thrall. Maybe this will be for the best, but in fact, how could anybody know? Without alternatives, what does it matter anyway?
    Enjoy the ride!