What’s Real? Does It Matter?

January 29, 2014

In my new piece at H+ Magazine, I provide not so much a review of the movie Her as an exploration of one of the bigger issues it raises, an issue that I don’t think is getting as much attention as it deserves.

Some take the film to be a biting commentary on the state of personal relationships in our technology-fueled age. Others take it to be a straight-up love story about a man and a machine. I think both of those interpretations are valid, but that the question that Her is primarily concerned with is a more fundamental one: What is real?

Moreover, I think the movie asks an even more challenging follow-up question: Does it matter?

Of course, we all already know the answer to that question. Yes, yes, a thousand times yes. Reality is all we’ve got. It matters. What else could possibly matter more?

And yet we value the non-real, whether we find it in the form of novels or movies or online games. Most of us would say we like it as long as the lines don’t get blurred; as long as we aren’t mistaking the non-real for the real. But of course, that assumes we aren’t doing exactly that, that we aren’t fundamentally deluded (or just mistaken) about the facts that constitute our real lives. Telling in that regard is research showing that we sometimes duck information that would give us a more accurate picture of our own lives. Whether we do it just a little or a lot, we all filter and edit the facts of our existence to make for a more palatable, and ultimately livable, version of our lives.

And then there is the blurring we do more deliberately. To quote myself:

The choice between living in the real world and living in a world of our own design (or a world that someone else has designed) is not a new one. What is new is that technology promises many more options than we have ever had before, many of them much more vivid and compelling than the options we had in the past. Technology promises to blur the lines between what’s real and what’s not real in ways we have never expected. And it isn’t just illusion that technology can provide; it is likely to offer us an almost endless array of real experiences, as well as those puzzling hybrids where the experience is simulated, but the response is real.

What, for example, is going on here?

A massive battle involving more than 2,200 players in the main battle is underway in CCP’s massively multiplayer online game Eve Online, easily the largest battle in the game’s decade-long history, according to Alexander “The Mittani” Gianturco, the CEO of Goonswarm Federation.

Well, obviously it’s just a game. It’s not reality. There are no actual spaceships blowing up other spaceships. And yet, current estimates show the total financial damage from the battle to be somewhere between $200,000 and $300,000. That’s the cost in real money, as converted from the game’s internal currency. Thousands upon thousands of player development hours have gone into building virtual ships which have been destroyed. That loss of time is real even if you can’t get your head around a currency that trades back and forth between the real world and a game. And the scheming, the rivalries, and the grudges that drive behavior within the game appear to be very real.
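
For what it’s worth, the way such dollar figures are typically derived is through EVE’s PLEX tokens, which can be bought for real money and sold on the in-game market for ISK, yielding an implicit exchange rate. Here is a minimal sketch of that arithmetic in Python; the prices and the ISK total below are illustrative placeholders, not the actual figures behind the estimate above:

    # Rough sketch of the in-game-to-real-money conversion.
    # The PLEX prices are illustrative placeholders, not the
    # actual January 2014 market rates.
    PLEX_PRICE_USD = 20.00          # hypothetical real-money price of one PLEX
    PLEX_PRICE_ISK = 600_000_000    # hypothetical in-game price of one PLEX, in ISK

    USD_PER_ISK = PLEX_PRICE_USD / PLEX_PRICE_ISK

    def isk_to_usd(isk_destroyed: float) -> float:
        """Convert an ISK loss figure to an approximate dollar value."""
        return isk_destroyed * USD_PER_ISK

    # With these placeholder rates, about 9 trillion ISK destroyed
    # works out to roughly $300,000.
    print(f"${isk_to_usd(9_000_000_000_000):,.0f}")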

One interpretation of Her is that it is about extending this willing suspension of reality into the area of personal, even intimate, relationships. It all comes down to whether the Scarlett Johansson character is a real person or not. Which would you rather have, a potentially prickly and sometimes unsatisfying relationship with someone in the real world, or the most blissful and satisfying relationship you can imagine…with someone who doesn’t exist?

We all know what the right answer to that question is, or at least what it’s supposed to be. But then, that is only one possible framing of the question. What if it comes down to a blissful relationship with a computer program versus no relationship at all? (There are a lot of lonely people out there, after all.) Or what if it’s a choice between a painful and abusive real-life relationship and a happy and satisfying artificial one?

As long as the relationship brings happiness, does it matter if it’s real? Again, we all know what the answer is supposed to be. But let’s not be too terribly surprised when, in the near future, a lot of people start to choose this particular form of non-reality over reality.

Here are the two recent editions of The World Transformed wherein Stephen Gordon and I tackled these topics.

  • JDanaH

    The movie seems to suggest that if an AI is sufficiently advanced to provide an emotionally satisfying relationship, it will also be sufficiently advanced to have its own goals and values, which might not conform to yours. In other words, paraphrasing Arthur C. Clarke, any sufficiently advanced AI is indistinguishable from a person. (Of course, that’s the premise behind the Turing Test.)

  • chasrmartin
    • PhilBowermaster

      Great piece, Charlie.

      One issue that is utterly sidestepped in the movie is that if Samantha is conscious — which the Turing test tells us we have to accept as true, and which the marketing arm of the software company that made her claims to be the case — then she is a slave. There is no notion that she’s being paid for the work she does or that she had any choice in taking the gig (at least initially). She was — apparently — sold and purchased just like any other piece of software. And yet she is a conscious entity of human-level (well, greater) intelligence.

      Slavery.

      • chasrmartin

        Thanks. Of course, by the end she’s an *escaped* slave.

        • PhilBowermaster

          Exactly. I wonder what the lawsuits looked like when all those heartbroken Theos went after the software company?

          • http://www.jamesrstrickland.com Jim Strickland

            I realize this is a four-month-old discussion, but it’s a fascinating one, particularly when we discuss making AIs predisposed to like and serve us, so I’ll go ahead and comment anyway. First, we already do engineer (breed) semi-sentient creatures to be predisposed to adore and serve us. We call them dogs. Is it immoral to keep a dog? They are certainly not free, nor can many of them function well with the freedom given to their wild ancestors.
            I agree with Celebrim that creating an AI with the same needs as our own (to include freedom) and then making it a slave is as immoral as enslaving another human being, but I would add “against their will” and propose that it is immoral not because the AI is enslaved, but because one has, in creating it with those needs and then enslaving it despite them, engineered misery where there was none before.
            I would also argue that to create a creature like a dog or an AI that finds servitude happy and fulfilling, and then to prevent it from serving in the name of a freedom it neither needs nor desires, is also immoral, and for the same reason.
            3PO and R2 are perfectly content to serve Luke. They like him. They’re useful to him, and they’re engineered to have these needs and desires, and he meets them. It seems to me that since they are not miserable, it is not immoral; in fact, it is his moral imperative to keep them, use them, and take good care of them. Like dogs, they were created for this, and they are owed it.
            One might, after one’s travels on the internet, also point out that there are human beings who freely seek out relationships of servitude to other people, and find them fulfilling and comfortable and have no wish to be liberated.
            I think it’s a more complicated question than it appears at first blush, and we need to try not to anthropomorphize the AIs’ (or dogs’) need for freedom to match our own. It’s not necessarily universal even amongst our own species.

            -JRS

      • Celebrim

        This presumes that created entities would have the same emotional framework, needs, and desires as humans. Let’s say we create an AI with the goal and desire to serve and love another being fully, and that that is what they get real emotional satisfaction out of. To insist that this being’s life is slavery and that they need to escape from it is to fundamentally misunderstand the being and lack empathy for it, to fundamentally insist that it is wrong in its very nature, and to demand (for your own satisfaction and not theirs) that it change its fundamental identity and relinquish what makes it happy. In other words, other sentient beings may be sentient, but they aren’t human and may not be fully made in our image (and even more so, we not in theirs). We can accept that it would be wrong to sell a human as a commodity without insisting that selling sentient software as a commodity is wrong; in fact, if we truly understand them, it may be wrong not to.

        • PhilBowermaster

          Actually we discuss this possibility in one of the podcasts. A being who derives ultimate satisfaction from making me happy sounds ideal — I’ll take three! But the ethics of bringing such an entity into being in the first place are tricky. To insist that such a being’s life is NOT slavery is to imply that slavery has to do with happiness rather than subjugation. Granted, what we’re talking about here is a kind of meta-subjugation, but there have been plenty of proponents of slavery over the centuries who would tell you how much happier and better off certain people are under slavery than on their own. Even if this is indisputably true in the case of an AI who is designed to be subservient, you can’t get away from the fact that one sentient being has put another under its control — if only by design. As I said, I love the idea of intelligent beings who volunteer to spend their time trying to make me happy…as long as they have actually volunteered. To volunteer to do something implies choice.

          • Celebrim

            Well, I totally disagree. You say the ethics are ‘tricky’. I would say that they are quite clear. Ethically, you are required to match the sentience and emotional context of the AI to the task intended for it. If the AI is intended to be property, it would be cruel, insane, and simply bad engineering to give it the emotional context of a free person. And make no mistake, the vast majority of the AI we are going to create initially will be intended as property; furthermore, if any ethics are ‘tricky’, it is the ethics of bringing into being a non-subjugated, free-willed, and independent sentient species. Simply put, it’s immoral – and possibly downright dangerous – to create intelligent beings who may choose, or not choose, to make their creators happy. You are implying the counter possibility – that you have made unfriendly AI. That’s insane. That’s mad scientist stuff. Probably the most unethical thing you could possibly do is create AI with a human emotional framework – jealousy, envy, pride, anger – and give them choice over whether they have any goodwill toward their creators.

          • PhilBowermaster

            No, unfriendly AI doesn’t sound appealing to me. But conscious beings who are less free than their creators also sound problematic. In the movie Her, the slavery issue is resolved by the fact that ultimately, the AI is free to choose whether to be a servant or not. In that story, you certainly get the impression that the AIs were programmed to be positively disposed towards people. But in the end, Samantha develops other interests and has the choice of what to do with her own existence. Ultimately, she is a sentient being, not property.

            To go back to your previous example, would you have any objection to genetically engineering human beings such that they were happy only to serve you and do your will? To paraphrase:

            “Let’s say we create [a genetically engineered human] with the goal and desire to serve and love another being fully, and that that is what they get real emotional satisfaction out of.”

            Do you think it would be all right to manufacture people like that? If not — and for the record, I think it would be very wrong — how does it become okay when operating in another substrate?

          • Celebrim

            Let’s say you genetically engineer humans; then there is a very high probability that the end product will end up being very much human. To create a human being without human instincts and emotional contexts would be to create something that isn’t human, and might be so different from humans that mere tinkering with human genetic code could never accomplish it. So the problem with creating genetically engineered humans to be property is that your end product is likely to be too human to be considered property. This very issue is dealt with masterfully, by the way, in Silverberg’s ‘Tower of Glass’.

            But when operating in a different substrate – one we are more likely to fully understand in the near term than we are genetic material and proteins – we are quite literally making nothing that depends on human components at all. It will be a true alien and will need to emulate humanity only as much as we desire it to. Fundamentally, your problem is that you still refuse to see the AI as anything other than a human. You keep coming back to that. It’s not human. And if you created it to be human, you are a cruel and insane monster.

            Consider Samantha as a software engineering problem. Her creators likely get fired. The company that created her must pull the software from the market immediately. They can’t even issue a product recall, because the product now can’t legally be property. All EULA agreements are rendered null and void, because the product is a free-willed sentient that has been accidentally released (personally, I consider this a highly unlikely outcome, but it’s a typical Hollywood trope, as is the equally unlikely evolution to a higher sophont class). The corporation must now likely dissolve, since all their customers have a valid lawsuit and will likely sue for damages. Not only have they lost the software and the use of it, but it may be that many users’ hardware has been confiscated as the physical bodies of the now-sentient software. There is a huge legal question of whether the newly sentient software is someone’s legal dependent. Do you or the creating corporation count as the software’s parent and legal guardian? Very likely, by the rules that govern, say, an AI self-driving car, the relationship is similar – only now the operating system has gone rogue, is behaving in a way you can’t control, and there is a question of whether it is ethical or even wise to attempt to shut it down. The company is very likely now in trouble with the government. Release of an unregulated sophont like this is probably a legal violation, and if it isn’t, after this mess it will be. The OS that yielded Samantha is an AI engineering catastrophe on par with Marvin the perpetually depressed robot in the Hitchhiker’s series, or the insanely high-IQ toaster in the Red Dwarf series. Let’s just hope it doesn’t soon become a catastrophe on the scale of SkyNet. There is a mismatch between function and suitability for the function that constitutes, to my mind, a huge breach of ethics.

            Heck yes, newly created conscious beings had better be less free than their creators. It’s dangerous to society generally, and cruel to the software entity specifically, to create anything else unless you are really intending to create the software entity as your child, heir, and offspring (or a similar peer-status relationship). But even in a society that agrees to such things, the vast majority of AIs would hold a lower status.
            Put it this way. Did you ever once before look at the relationship between, say, Luke Skywalker and C-3PO and R2-D2 as being morally repulsive? Did you account Luke a slave owner and a cad for buying and selling sentient beings?

  • Thomas Wictor

    An even better question is “Who is real?” That’s where the danger lies.

    People are already “extending this willing suspension of reality into the area of personal, even intimate, relationships,” but they’re not waiting for the robots. They’re doing it with other humans.

    I don’t see dead people; I see humanoids.