At H+, Ben Goertzel has a review of the new Ray Kurzweil bio documentary Transcendent Man. Ben’s review makes me all the more eager to see the film. I’m hoping there’s a screening in the Denver area soon. Here’s the trailer:
This section of the review particularly caught my attention:
“Ray, as you know, I’m involved in a project oriented toward creating a powerful AI system, and if it works as well as I hope, I think it may lead to a Singularity well before your projected date of 2045. And my goal in doing this isn’t just to create an artificial supermind to end scarcity and bring immortality and all that good stuff, it’s also to become one with that supermind. I don’t just want us to build gods, I want us to become gods. But there’s one doubt that often vexes me, and I’d like to know what you think about it. I wonder if there will come a point, when we’ve enhanced our brains enough with advanced technology, when we’ll have to stop and say: OK, that’s all I can do and still remain myself. If I add anything more – if I up my IQ from 500 to 510 – I’ll lose the self-structure and the illusion of will and all the other things that make me Ben Goertzel. I’ll just become some other, godlike mind whose origin in the human ‘Ben Goertzel’ is pretty much irrelevant.”
Ray responded by stating that he felt it would be possible to achieve basically arbitrarily high levels of intelligence while still retaining individuality.
But the moderator of the Q&A session, NPR Correspondent Robert Krulwich (who did an absolutely wonderful job), took up my side. He posited a future scenario where Robert enhanced his brain with the UltimateBrain brain-computer plug-in, and Ray enhanced his brain with the SuperiorBrain brain-computer plug-in. If Robert is 700 parts UltimateBrain and 1 part Robert, and Ray is 700 parts SuperiorBrain and 1 part Ray … i.e., if the human portions of the post-Singularity cyborg beings are minimal and relatively un-utilized … then, in what sense will these creatures really be human? In what sense will they really be Robert and Ray?
Ray responded that they would be human because the UltimateBrain and SuperiorBrain would be built by humans … or built by robots built by humans … so in a sense they would still be human, since they’d be human technology.
Yes, noted Robert, they would still be human in that sense – but that didn’t mean they’d still be Robert and Ray.
I stated my own view, that a point probably will be reached where to progress further, we’ll have to give up our human selves and accept that the role of our human selves has been to give rise to smarter, wiser, greater minds, more capable of creative activity, positive emotion and connection with the universe.
Ray’s (grinning) answer: “And would that be so bad?”
My (smiling) reply: “No.”
I’m having a hard time with Ben’s argument. I don’t see how increasing intelligence can possibly reduce or eliminate individuality. If the UltimateBrain or SuperiorBrain really are “ultimate” or “superior” versions of what we have, then they would have to be much more complex than the brains we have. They might all start out the same, but wouldn’t each instantiation of these programs quickly diversify based on the experiences and preferences of the individual intelligence that “runs its personality” in that environment? And wouldn’t that environment be not just smarter than we are, but massively more complex? And isn’t complexity one of the key contributors to, if not the defining factor behind, individuality?
I consider myself to be much more of an individual today than I was when I was, say, five years old. There’s just a lot more to me that’s different, and that can be different, from other people than there was back then. In some ways, I can comprehend all the motivations and feelings that five-year-old me had. This isn’t exactly the same as Ben’s idea of seeing through the “self-structure and illusion of will,” but I think it’s in the same ballpark. I don’t understand why a personality that transcends to a new level of organization would lose individuality in the process. Of course, a new concept of the self — the more sophisticated self — would have to come into play, and I’m not sure what to do with the “illusion of will.” It’s hard to imagine a meaningful existence without this particular illusion in place. Could intelligent beings have a meaningful existence without any notion of the reality of their own will? I suppose they could. Could such beings continue to be individual and distinct from each other? I see no reason why they wouldn’t be.
It seems to me that Ben’s argument rests on the notion that individuality is some kind of limitation inherent in our current form. I think not; rather, it is the manifestation of our complexity. A more intelligent and more complex being has the capacity to be more of an individual than would a simpler and slower being.
As to the question of whether a massively intelligent version of me would still be me, going back to the five-year-old example, I am already arguably no longer the same person I used to be. (And in fact you don’t have to go all the way back to age five. I’m pretty different from what I was like at 25. Or even 35.) The real question is one of continuous experience. Even if “I” am no longer anything like what I used to be, it’s still “me” if there is a continuous experience of selfhood. Or even, possibly, if there is a discontinuous experience of selfhood. Before I was a five-year-old, I was a two-year-old, and before that a fetus, and before that a zygote. Yet I consider all of these phases to have been “me,” even though I carry forward no conscious memories of those phases and they don’t drive my current behavior.
Finally, on the question of whether the superintelligence is “human.” I would just want to know: is it intelligent, curious, humane, joyful, artistic, empathetic — or does it have some transcendent version of each of these qualities? If not, then no, it isn’t human. And I’m not even sure why we would want to head in that direction. But if it does possess those qualities, then it’s human enough for me — and as human as anybody needs to be.