

As Human as We Need to Be

At H+, Ben Goertzel has a review of the new Ray Kurzweil bio documentary Transcendent Man. Ben's review makes me all the more eager to see the film. I'm hoping there's a screening in the Denver area soon. Here's the trailer:

This section of the review particularly caught my attention:

"Ray, as you know, I'm involved in a project oriented toward creating a powerful AI system, and if it works as well as I hope, I think it may lead to a Singularity well before your projected date of 2045. And my goal in doing this isn't just to create an artificial supermind to end scarcity and bring immortality and all that good stuff, it's also to become one with that supermind. I don't just want us to build gods, I want us to become gods. But there's one doubt that often vexes me, and I'd like to know what you think about it. I wonder if there will come a point, when we've enhanced our brains enough with advanced technology, when we'll have to stop and say: OK, that's all I can do and still remain myself. If I add anything more – if I up my IQ from 500 to 510 – I'll lose the self-structure and the illusion of will and all the other things that make me Ben Goertzel. I'll just become some other, godlike mind whose origin in the human 'Ben Goertzel' is pretty much irrelevant."

Ray responded by stating that he felt it would be possible to achieve basically arbitrarily high levels of intelligence while still retaining individuality.

But the moderator of the Q&A session, NPR Correspondent Robert Krulwich (who did an absolutely wonderful job), took up my side. He posited a future scenario where Robert enhanced his brain with the UltimateBrain brain-computer plug-in, and Ray enhanced his brain with the SuperiorBrain brain-computer plug-in. If Robert is 700 parts UltimateBrain and 1 part Robert, and Ray is 700 parts SuperiorBrain and 1 part Ray ... i.e., if the human portions of the post-Singularity cyborg beings are minimal and relatively unutilized ... then in what sense will these creatures really be human? In what sense will they really be Robert and Ray?

Ray responded that they would be human because the UltimateBrain and SuperiorBrain would be built by humans ... or built by robots built by humans ... so in a sense they would still be human, since they'd be human technology.

Yes, noted Robert, they would still be human in that sense – but that didn't mean they'd still be Robert and Ray.

I stated my own view, that a point probably will be reached where to progress further, we'll have to give up our human selves and accept that the role of our human selves has been to give rise to smarter, wiser, greater minds, more capable of creative activity, positive emotion and connection with the universe.

Ray's (grinning) answer: "And would that be so bad?"

My (smiling) reply: "No."

I'm having a hard time with Ben's argument. I don't see how increasing intelligence can possibly reduce or eliminate individuality. If the UltimateBrain or SuperiorBrain really are "ultimate" or "superior" versions of what we have, then they would have to be much more complex than the brains we have. They might all start out the same, but wouldn't each instantiation of these programs quickly diversify based on the experiences and preferences of the individual intelligence that "runs its personality" in that environment? And wouldn't that environment be not just smarter than we are, but massively more complex? And isn't complexity one of the key contributors to, if not the defining factor behind, individuality?

I consider myself to be much more of an individual today than I was when I was, say, five years old. There's just a lot more to me that's different, and that can be different, from other people than there was back then. In some ways, I can comprehend all the motivations and feelings that five-year-old me had. This isn't exactly the same as Ben's idea of seeing through the "self-structure and illusion of will," but I think it's in the same ballpark. I don't understand why a personality that transcends to a new level of organization would lose individuality in the process. Of course, a new concept of the self -- the more sophisticated self -- would have to come into play, and I'm not sure what to do with the "illusion of will." It's hard to imagine a meaningful existence without this particular illusion in place. Could intelligent beings have a meaningful existence without any notion of the reality of their own will? I suppose they could. Could such beings continue to be individual and distinct from each other? I see no reason why they wouldn't be.

It seems to me that Ben's argument rests on the notion that individuality is some kind of limitation inherent in our current form. I think not; rather, it is the manifestation of our complexity. A more intelligent and more complex being has the capacity to be more of an individual than would a simpler and slower being.

As to the question of whether a massively intelligent version of me would still be me, going back to the five-year-old example, I am already arguably no longer the same person I used to be. (And in fact you don't have to go all the way back to age five. I'm pretty different from what I was like at 25. Or even 35.) The real question is one of continuous experience. Even if "I" am no longer anything like what I used to be, it's still "me" if there is a continuous experience of selfhood. Or even, possibly, if there is a discontinuous experience of selfhood. Before I was a five-year-old, I was a two-year-old, and before that a fetus, and before that a zygote. Yet I consider all of these phases to have been "me," even though I carry forward no conscious memories of those phases and they don't drive my current behavior.

Finally, on the question of whether the superintelligence is "human": I would just want to know whether it is intelligent, curious, humane, joyful, artistic, and empathetic -- or whether it has some transcendent version of each of these qualities. If not, then no, it isn't human. And I'm not even sure why we would want to head in that direction. But if it does possess those qualities, then it's human enough for me -- and as human as anybody needs to be.

Comments

Phil,

Well said.

I think of it in this fashion: There are probably several million characteristics that accurately describe me. There are probably only a tiny handful of those that I consider essential to my identity--the core of who I am. Changing any of the others no more changes who I am than does getting laser eye surgery or having wisdom teeth removed--or simply spending a year in college. (In fact, the latter is more likely to result in some material change to who I am.)

To the extent that superintelligence might change who we are, I would suggest that it would most likely turbocharge the process by which life experiences change who we are. Going back to my college example above: spending a year, or several, in college is certainly likely to change who you are. If future technology allows one to effectively experience a similar amount of "life" (virtual or otherwise) in less time, then it's possible that we'll morph from any given present self to that self's future self more quickly than we otherwise would -- but technology would at best accelerate the ways we already allow the world to shape who we are, day in and day out. The technology itself wouldn't be changing who we are.

I have always thought of the 'trans' in Transhuman more in the sense of 'transitional' or 'transformative' than 'transcendent.' If you look at the line of critters who preceded Homo sapiens - from Lucy on up - each of them was the 'Human' of its day: the state of the art for its time. As each 'model' transformed or transitioned into the next, acquiring ever more complexity and ability, they redefined what it meant to 'be Human.' From this perspective, whatever we evolve into - those creatures will 'be Human.' Their existence will define the species for their time. Will we 'approve' of those future Humans? Who knows? Do ya think Homo habilis or Homo erectus would have 'approved' of us? Or just been scared silly?

I think the point Ray makes in his book is that the changes to our consciousness will happen gradually over a period of a few decades. Since this process will be gradual, there will be continuity in the new patterns of your consciousness. If I were to jump straight from my 3-year-old self to my current 30-year-old self, my 3-year-old self would think this is not the same person (if he were even able to reason at that level). The same can be said about a super-intelligent me 30 years from now. That assumes change is gradual, and even in the event of a benevolent AGI arriving in 5 years, people may still decide to "upgrade" their minds gradually to maintain their sense of self.

My point wasn't that "Ben Goertzel + 10^5 IQ points" wouldn't be an individual ...

It might or might not be an individual in the sense that we currently use that term. The "phenomenal self" that each of us humans has (to use Metzinger's term) might not be something that a superintelligent being would find a use for. But that wasn't my point.

My point was that it might not recognizably have any of the self-structure of "Ben Goertzel" ... even if it were an individual of some kind.

Maybe the human personality and mind architecture are simply not compatible with drastically transhuman levels of intelligence.

The analogies some commenters make to "30-year-old humans versus 3-year-old humans" are not really on point, because both of these have human brain architecture and have intelligence within an order of magnitude of ordinary humans.

-- Ben G

"I'm having a hard time with Ben's argument. I don't see how increasing intelligence can possibly reduce or eliminate individuality. If the UltimateBrain or SuperiorBrain really are "ultimate" or "superior" versions of what we have, then they would have to be much more complex than the brains we have. They might all start out the same, but wouldn't each instantiation of these programs quickly diversify based on the experiences and preferences of the individual intelligences which "runs its personality" in that environment? And wouldn't that environment be not just smarter than we are, but massively more complex? And isn't complexity one of the key contributors to, if not the defining factor behind, individuality?"

I agree wholly. And I'd add this argument: our sense of self is utterly contingent on continuity ... memory. Say we achieve the powerful Singularity intelligences described in the interview, either all at once or, as I anticipate, gradually, with better and better brain augments. What happens when we can remember, I mean REALLY remember, in detail, our lives before augments, during the gradual buildup of augmentation, and then with remarkably sharp, full memories as we enter the Singularity with our vast, upgraded intelligences?

I'm as certain as anything that we would experience a much sharper sense of self than we do now.

Ben makes a good point that the architecture of the Jonathan v5000 brain may be so radically different that it is basically unrecognizable to my v1 self. If I slowly modified my mind architecture one step at a time, I probably wouldn't complain, because each incremental self would be only slightly different (perhaps), but the v5000 self would probably have almost nothing in common with my v1 self. Would I then be an entirely different entity?






