Bleak Outlook, but not Bleak Enough?

By Phil | January 21, 2011
[Image: UFA poster for Metropolis, via Wikipedia]

College Crunch presents the top 10 Most Technophobic Movies, with some completely expected entries, some fairly obscure material, and at least a couple of surprises.

Those of us more favorably inclined to technology often take issue with the technology-bashing that goes on in such films (while otherwise greatly enjoying many of them). But at least one of our friends in the singularity-aware community might actually criticize these movies from the other side: maybe they don’t present a bleak enough picture.

Yes, the singularity is the biggest threat to humanity, warns Michael Anissimov:*

Humans overestimate our robustness. Conditions have to be just right for us to keep living. If AGIs (artificial general intelligences) decided to remove the atmosphere or otherwise alter it to pursue their goals, we would be toast. If temperatures on the surface changed by more than a few dozen degrees up or down, we would be toast. If natural life had to compete with AI-crafted cybernetic organisms, it could destroy the biosphere on which we depend.

Greater-than-human intelligences might wipe us out in pursuit of their own goals as casually as we add chlorine to a swimming pool, and with as little regard as we have for the billions of resulting deaths. Both the Terminator scenario, wherein they hate us and fight a prolonged war with us, and the Matrix scenario, wherein they keep us around essentially as cattle, are a bit too optimistic. It’s highly unlikely that they would have any use for us or that we could resist such a force even for a brief period of time — just as we have no need for the bacteria in the swimming pool and they wouldn’t have much of a shot against our chlorine assault.

There is no reason to assume that these future intelligences will automatically be “nice” or that they will care about us just because they are intelligent. Greater-than-human intelligences might, on their own, develop the kinds of ethical standards we have, or standards that we would consider significantly higher than ours. Here’s hoping! But it’s just a hope. Left to their own devices, they are equally likely (if not more likely) to develop ethical standards that operate with no reference to us whatsoever (again, not unlike the way things stand between us and bacteria), that strike us as vastly immoral, or that are completely incomprehensible to us — and that we’ll never get a chance to try to understand.

That’s why, Michael argues, our very existence may well depend on getting it right the first time we produce a greater-than-human intelligence:

Why must we recoil against the notion of a risky superintelligence? Why can’t we see the risk, and confront it by trying to craft goal systems that carry common sense human morality over to AGIs? This is a difficult task, but the likely alternative is extinction. Powerful AGIs will have no automatic reason to be friendly to us! They will be much more likely to be friendly if we program them to care about us, and build them from the start with human-friendliness in mind.
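
What would it even mean to “craft a goal system” with human-friendliness built in? Here is a deliberately crude sketch in Python. Every name and number in it is invented for illustration, and real AGI goal-system design is an open problem; the only point is that the difference between an indifferent optimizer and a “friendly” one is whether harm to humans appears in its objective at all.

    # Toy sketch only: a "goal system" reduced to a scoring function.
    from dataclasses import dataclass

    @dataclass
    class Plan:
        name: str
        resource_gain: float   # how well the plan serves the AGI's own goal
        human_welfare: float   # side effects on humans (negative = harm)

    def indifferent_score(plan: Plan) -> float:
        # An optimizer never told to care about us: human welfare
        # simply does not appear in its objective.
        return plan.resource_gain

    def friendly_score(plan: Plan, weight: float = 10.0) -> float:
        # The same optimizer with human-friendliness built in from the
        # start: harm to humans carries a heavy penalty.
        return plan.resource_gain + weight * plan.human_welfare

    plans = [
        Plan("strip the atmosphere", resource_gain=100.0, human_welfare=-50.0),
        Plan("leave the humans alone", resource_gain=60.0, human_welfare=0.0),
    ]

    print(max(plans, key=indifferent_score).name)  # strip the atmosphere
    print(max(plans, key=friendly_score).name)     # leave the humans alone

The hard part, of course, is everything the sketch waves away: specifying “human welfare” precisely enough that a superintelligence cannot satisfy the letter of the objective while destroying its spirit. That is exactly the “difficult task” Michael is pointing at.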

This brings me back around to one of the surprising entries on the list of technophobic movies: Fritz Lang’s silent epic, Metropolis. If you have never seen this movie, or have not seen it in a while, I cannot recommend the newly restored version strongly enough. It is an amazing movie. Lang presents images that are astounding even to our jaded and CGI-addled eyes — using 1927 technology.

Because it deals with a murderous robot and an upper class exploiting a working class, Metropolis is often viewed as a warning about technology or capitalism run amok. However, I think such characterizations have more to do with making the film fit into neat categories than they do with articulating the philosophical message at the heart of the story. That message, which is repeated throughout the film, is that the heart must be the mediator between the mind and the hand. When the heart does not act as a mediator, suffering is the result. The exploitation of the masses, the murderous robot, the destruction and violence that occur in the wake of the workers’ uprising, even a slightly modified version of the Biblical story of the Tower of Babel are all presented as examples of this principle in action.

On the other hand, when the heart steps in and takes its rightful place between the mind and the hand, things begin to improve. What Michael is telling us about the singularity is that it, unlike the fantastic goings-on in the movie Metropolis, will not offer a second chance to set things right.

The heart must be in place from the beginning — or we’re in a lot of trouble.

*Of course, Michael also recognizes the singularity as our greatest opportunity. He’s not anti-technology, just anti-wrongheaded assumptions about how technological progress “must” play out.

  • faria.jeff

    “Why can’t we… carry common sense human morality over to AGIs?”

    Uh… have you ever taken a good, long, honest look at what passes for human morality? Do you REALLY want a superintelligence to do unto us as we do to ourselves?

    Really?

    You know what’s really sad (yet kind of stupefyingly amazing) here? It’s that you, and folks like you, actually think that’s some sort of answer, instead of what it really is – the problem. The last thing in the world you should want is a superpowerful machine employing (shudder) human moral values.

    We’d be wiped out in a day.

  • theradicalmoderate

    Let’s say you could define a set of human-friendly values for your AI. How do you ensure that they are implemented correctly in every single AI that has the autonomous ability to consume resources? You can’t. Eventually, either maliciousness, carelessness, or a simple bug is going to produce something nasty. Now, maybe you can design an enforcement regime that kills the thing before it gets too big for its britches, but the odds of something pretty bad happening approach unity given enough time.
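
    To put a rough number on “approach unity”: if each independently built AI has even a tiny chance p of being faulty, then the chance that at least one of n such AIs is faulty is 1 - (1 - p)^n. The p below is purely hypothetical; only the shape of the curve matters.

        # Back-of-envelope: assume each independently built AI has a
        # small, independent probability p of being faulty.
        p = 0.001  # hypothetical per-AI failure rate
        for n in (100, 1000, 10000):
            print(n, 1 - (1 - p) ** n)
        # 100 -> ~0.10, 1000 -> ~0.63, 10000 -> ~0.99995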

    I suspect that the only truly durable answer to this problem is along the lines of, “If you can’t beat ‘em, join ‘em.” If the entities formerly known as humans can compete and trade with their engineered progeny, then we may be gone only to the extent of being enhanced beyond recognition.

  • M. Simon

    Asimov’s Three Laws are a start. The deal is: morality is not logical and involves making compromises between competing moral rules whose weight varies with time and circumstance.

    Not even humans do it well.
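
    As a toy sketch of what I mean by competing rules with shifting weights (every rule, weight, and number below is made up for illustration):

        # Toy arbitration between competing moral rules.
        RULES = {
            "do_no_harm":  lambda a: -a["harm"],
            "obey_orders": lambda a: a["obedience"],
        }

        def choose(actions, weights):
            # Pick the action with the best weighted sum of rule scores.
            def score(a):
                return sum(w * RULES[r](a) for r, w in weights.items())
            return max(actions, key=score)

        actions = [
            {"name": "comply", "harm": 5, "obedience": 1},
            {"name": "refuse", "harm": 0, "obedience": -1},
        ]

        # When "do no harm" dominates, the machine refuses the order...
        print(choose(actions, {"do_no_harm": 10, "obey_orders": 1})["name"])
        # ...but shift the weights (a new circumstance) and it complies.
        print(choose(actions, {"do_no_harm": 0.1, "obey_orders": 10})["name"])

    Nothing in the sketch says which weights are right, or who gets to set them. That is the whole problem.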

  • Veeshir

James Hogan wrote a book titled “The Two Faces of Tomorrow.”

    In it, they’re about to turn on a Skynet-style AI, but they’re afraid, so they build an orbiting colony and test it there.

    It’s a good book. He’s hit and miss, but that’s one of his best. (Code of the Lifemaker is probably his best.)

  • Phil

    I was going to comment in response to Faria.Jeff’s comment, but I’ve decided to create an update to the original post instead; see above.

  • faria.jeff

    “I’m not sure how his assessment of the argument moves us forward”
    Didn’t know I was obliged to solve the problem. Sometimes it’s a sufficient contribution to point out that the problem really hasn’t been solved.

    As it happens, I’m writing a book in which this theme plays a part. So it’s on my mind. (And I’ve marked this page as a reference – it may be a useful resource to me, I think. Thank you for that.)

    “I’m flattered at the thought that my post is both stupefying and amazing”
    Didn’t say your post is both stupefying and amazing. I referred very specifically to the facile notion that “human morals” were a worthwhile solution. That concept is hardly confined to this post, for that matter – I’ve seen it before.

    I am sorry, by the way, that you may have taken what I said somewhat personally. Didn’t mean it that way, genuinely didn’t mean to insult you, but I can see how it could be taken that way. As I say, it’s a common idea, this idea of ‘human morals’. Likewise, the idea of ‘common sense’, which theoretically bubbles up from the ground like a spring. Unfortunately, both are illusions. Myths.

    Whose common sense, and which human morals, shall we imprint on the future race of machines, to protect us from them? The Nazis? Obviously not, you exclaim! But the Nazis, in their time, would argue vociferously that they not only had morals, but that those morals were superior in nature. (Despite the way members of the Nazi party are portrayed in popular culture, in their daily lives they loved their children, went to museums, walked their dogs and attended church. Just like anybody else. Of course, they also had their internally-justified beliefs, and hatreds of wrongdoers who violated their moral code – just like anybody else.)

    Perhaps in modern politics, then, we will find our template for moral behavior. Well, the Democrats right now are blaming the Republicans for intemperate language that supposedly led to the recent killings in Tucson. The Republicans, as might be expected, point out behavior just as egregious among their opponents.

    Maybe the thing to do is look to religion for a moral standard. The first problem is finding the ‘true’ religion. Good luck with that. Then the next step will be tracing the history of that religion to see where that standard has taken its practitioners. For example, Christianity has led to outcomes such as the Crusades, widespread persecution of independent thinkers, and witch-burning.

    Fact is, every human being (just like the Nazis) assumes HE has the proper ‘moral code’ and ‘common sense’ within him. The guy standing next to him – well, he’s not so sure about that guy.

    Meanwhile, our common sense and human decency have brought about Auschwitz (during much of which the world merely stood by and rationalized), world wars, famine, torture, persecution of all stripes, and massive corruption. Every human being on this planet has both participated in (whether they’ll admit it or not) and suffered from (to one degree or another) a collective immorality that amounts to self-hatred.

    Can’t wait to see what a race of machines does with our supposedly ‘core’ values. Goody!

    “people are no damn good”
    I can dig that I come across that way. It’s a complex and difficult subject, and this is merely a blog post. However, “people are no damn good” is not exactly my point. My actual point is that what we see as “human morality” is not only subjective, it is EXTREMELY subjective – to the point of having little or no intrinsic value. To hold “it” forth (whatever “it” is) as some sort of standard for emulation (!!) is ludicrous.

    “Man is the measure of all things.” And that’s the problem. When it comes to our innermost selves, our place, our value and our purpose, we are simply in no position to do the measuring.

    Thanks for reading all this, and for letting me speak my piece. It’s useful for me, as I suggested earlier, to work out some of these ideas outside of the project I’m working on. And it is likewise useful to me to observe your beliefs and evaluate your responses. (Since it may well be that my presence here is doing nothing for YOU besides providing you with a migraine, let me again express my appreciation.)

  • faria.jeff

    I realize I’ve gone on too long already, but if I might beg a small further indulgence, here’s a bit I forgot to include:

    In our search for an ideal moral template, and common sense, perhaps we need to look back into our own history. Our founding fathers, after all, seem to have had ‘common sense’ in spades. (Tom Paine used that very phrase as the title of his famous pamphlet.)

    The framers of our Constitution faced a problem similar to the one this post describes. Just as a future race of machines seems to need a template for behavior that would keep its actions within ‘acceptable’ bounds, so too did certain men desire to keep a new republic from repeating the excesses of the old.

    I could take the tack that, despite their proposition that “all men are created equal”, many of the men who stood behind these documents owned slaves. I could also point out other hypocrisies, or note the wars and civil injustice that ensued. I could note that, despite the ‘democracy’ that was proposed and the pains the US colonists had suffered at the hands of a king – the first thing these people wanted to do was anoint George Washington as their king! However, the fact remains that this experiment overall worked out quite well, despite its failings. And much of the credit must go to these underlying principles.

    But what speaks more to my point about the Declaration of Independence and the Bill of Rights and so on is this: The Founders knew that, despite their best efforts, their republic would eventually become corrupt and oppressive. Therefore, they built into their documents an urgent message to ‘alter or abolish’ their government, when the time came.

    So, this is what some of the wisest men this country has known believed: Morality cannot be infallibly codified into law.

    No matter what ‘moral code’ we impose on these supermachines – it WILL fail. It will fail not because of what machines are, but because of who WE really are. Then we will have conflict with them, and because we will be so utterly dependent on them (think of life without the machines we have NOW!), the conflict will be catastrophic.

    Just as all our conflicts have ever been, and will forever be.

  • Phil

    faria.jeff wrote:

    >>Didn’t say your post is both stupefying and amazing.

    Yeah, you might want to check the calibration on those irony sensors.

    >>I am sorry, by the way, that you may have taken what I said somewhat personally.

    Not in the least.

    I think the comparison to Nazis is a bit overwrought. Most of us have a pretty clear idea of the critical deltas between Nazi morality and more mainstream alternatives. HINT: it’s not about walking dogs or going to museums.

    The question of whether any moral code can be perfect or whether they will all ultimately fail is beside the point if you assume, as I do, that a greater than human artificial intelligence is going to show up one of these days. (See my three options. One of those things will happen. Still not clear which one you expect or believe to be preferable.)

    If you assume that a greater than human AI can be realistically avoided, then we are just arguing in circles. I don’t think it can. Assuming an AI will show up, I repeat what I wrote above:

    Personally, I’ll take a machine that has a clear idea of what it means to be good and which is trying to be good, even in a flawed way, over one that has no idea whatsoever. Certainly, there are many potential unintended consequences and many ways even a machine that is trying to be good to us could end up wiping us out. On the other hand, I think there’s a strong case to be made that one of the things machines are much better at than we are is adhering to standards. In any case, trying to produce a moral, empathetic greater-than-human intelligence comes off sounding like a better approach than any of the others I’ve seen so far.

    The stuff about the founders is wide of the mark. I’m not talking about trying to persuade an AI to follow a set of rules. I’m talking about trying to instill it with the right set of assumptions and tendencies.

    You can argue that those will be wrong and that they will fail, but unless you have an alternative — not to say you have to solve the problem — I don’t see what your critique contributes to the discussion, other than telling us that we’re all doomed.

    In which case, congrats — no one is going to tell you that YOUR outlook isn’t bleak enough!

  • https://me.yahoo.com/a/_9c4hQwi2Igum_04FCcr6PmKUQ31#295aa

    What I don’t understand is why the AI is always viewed as a separate entity. I think it is far more likely that the first AI emerges from a confluence of collective human-machine interaction. We will be in effect the cells of the bodies that make up the AI entities. They will be able to exist without us about as easily as my mind can exist without the cells that make it up or the heart that pumps blood into it. We are the substrate on which these entities will operate or are already operating depending on how loosely you define an AI.

    And if we want to take the scenario of us as bacteria in a swimming pool a little further, we may not like bacteria in our swimming pools, but we like them in our soil, in our guts, in our vats making biodiesel, in our labs helping us develop new drugs. They are useful, necessary. We don’t have to love bacteria to keep them around. AI doesn’t have to love or cherish us to keep us around. And I suspect we will be as intrinsic to their ecology as bacteria are to our ecology.

    /end analogy

  • https://me.yahoo.com/a/_9c4hQwi2Igum_04FCcr6PmKUQ31#295aa

    *You can argue that those will be wrong and that they will fail, but unless you have an alternative — not to say you have to solve the problem — I don’t see what your critique contributes to the discussion, other than telling us that we’re all doomed.*

    There’s nothing wrong with stating “we are all doomed”. But that is not what he is saying. I think he is simply pointing out that our capability to design a friendly AI is very, very poor indeed, and that we greatly overestimate our capabilities in this regard. I believe we have just about as much of a chance of designing friendly AI as single-celled organisms had of designing friendly multicellular life.

    All we can do is try our best. I’m hopeful that we will have a firm place in the world to come. One level of complexity is built out of the one beneath it; it doesn’t replace it.

    I think we have to admit that human beings, maybe millions of them, will come to grief at the hands of AI, but that humanity as a whole will survive and thrive, though it will change.

  • vtelco

    I don’t think there will ever be anything better than the human brain, or that anything can be invented greater than how we are created. Technological advancement enhances our skills, but not our hearts and definitely not our souls.
