Those of us more favorably inclined toward technology often take issue with the technology-bashing that goes on in such films (while otherwise greatly enjoying many of them). But at least one of our friends in the singularity-aware community might actually criticize these movies from the other side: maybe they don’t present a bleak enough picture.
Yes, the singularity is the biggest threat to humanity, warns Michael Anissimov:*
Humans overestimate our robustness. Conditions have to be just right for us to keep living. If AGIs (artificial general intelligences) decided to remove the atmosphere or otherwise alter it to pursue their goals, we would be toast. If temperatures on the surface changed by more than a few dozen degrees up or down, we would be toast. If natural life had to compete with AI-crafted cybernetic organisms, it could destroy the biosphere on which we depend.
Greater-than-human intelligences might wipe us out in pursuit of their own goals as casually as we add chlorine to a swimming pool, and with as little regard as we have for the billions of resulting deaths. Both the Terminator scenario, wherein they hate us and fight a prolonged war with us, and the Matrix scenario, wherein they keep us around essentially as cattle, are a bit too optimistic. It’s highly unlikely that they would have any use for us or that we could resist such a force even for a brief period of time — just as we have no need for the bacteria in the swimming pool, and the bacteria wouldn’t have much of a shot against our chlorine assault.
There is no reason to assume that these future intelligences will automatically be “nice” or that they will care about us just because they are intelligent. Greater-than-human intelligences might, on their own, develop the kinds of ethical standards we have, or standards that we would consider significantly higher than ours. Here’s hoping! But it’s just a hope. Left to their own devices, they are equally likely (if not more likely) to develop ethical standards that operate with no reference to us whatsoever (again, not unlike the way things stand between us and bacteria), that strike us as vastly immoral, or that are completely incomprehensible to us — and that we’ll never get a chance to try to understand.
That’s why, Michael argues, our very existence may well depend on getting it right the first time we produce a greater-than-human intelligence:
Why must we recoil against the notion of a risky superintelligence? Why can’t we see the risk, and confront it by trying to craft goal systems that carry common sense human morality over to AGIs? This is a difficult task, but the likely alternative is extinction. Powerful AGIs will have no automatic reason to be friendly to us! They will be much more likely to be friendly if we program them to care about us, and build them from the start with human-friendliness in mind.
This brings me back around to one of the surprising entries on the list of technophobic movies: Fritz Lang’s silent epic, Metropolis. If you have never seen this movie, or have not seen it in a while, I cannot recommend the newly restored version strongly enough. It is an amazing movie. Lang presents images that are astounding even to our jaded and CGI-addled eyes — using 1927 technology.
Because it deals with a murderous robot and an upper class exploiting a working class, Metropolis is often viewed as a warning about technology or capitalism run amok. However, I think such characterizations have more to do with making the film fit into neat categories than they do with articulating the philosophical message at the heart of the story. That message, which is repeated throughout the film, is that the heart must be the mediator between the mind and the hand. When the heart does not act as a mediator, suffering is the result. The exploitation of the masses, the murderous robot, the destruction and violence that occur in the wake of the workers’ uprising, even a slightly modified version of the Biblical story of the Tower of Babel are all presented as examples of this principle in action.
On the other hand, when the heart steps in and takes its rightful place between the mind and the hand, things begin to improve. What Michael is telling us about the singularity is that it, unlike the fantastic goings-on in Metropolis, will not offer a second chance to set things right.
The heart must be in place from the beginning — or we’re in a lot of trouble.
*Of course, Michael also recognizes the singularity as our greatest opportunity. He’s not anti-technology, just anti-wrongheaded assumptions about how technological progress “must” play out.