<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
		>
<channel>
	<title>Comments on: As Human as We Need to Be</title>
	<atom:link href="https://blog.speculist.com/artificial_intelligence/as-human-as-we.html/feed" rel="self" type="application/rss+xml" />
	<link>https://blog.speculist.com/artificial_intelligence/as-human-as-we.html</link>
	<description>Live to see it.</description>
	<lastBuildDate>Thu, 16 Dec 2021 08:21:00 +0000</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.6.1</generator>
	<item>
		<title>By: Jonathan</title>
		<link>https://blog.speculist.com/artificial_intelligence/as-human-as-we.html#comment-4538</link>
		<dc:creator>Jonathan</dc:creator>
		<pubDate>Mon, 11 May 2009 11:01:27 +0000</pubDate>
		<guid isPermaLink="false">http://localhost/specblog/?p=1883#comment-4538</guid>
		<description><![CDATA[Ben makes a good point that the architecture of the Jonathan v5000 brain may be so radically different that it is basically unrecognizable to my v1 self.  If I slowly modified my mind architecture one step at a time, I probably wouldn&#039;t complain, because each incremental self would only be slightly different (perhaps), but the v5000 self would probably have almost nothing in common with my v1 self. Would I then be an entirely different entity?]]></description>
		<content:encoded><![CDATA[<p>Ben makes a good point that the architecture of the Jonathan v5000 brain may be so radically different that it is basically unrecognizable to my v1 self.  If I slowly modified my mind architecture one step at a time, I probably wouldn&#8217;t complain, because each incremental self would only be slightly different (perhaps), but the v5000 self would probably have almost nothing in common with my v1 self. Would I then be an entirely different entity?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Sally Morem</title>
		<link>https://blog.speculist.com/artificial_intelligence/as-human-as-we.html#comment-4537</link>
		<dc:creator>Sally Morem</dc:creator>
		<pubDate>Sun, 10 May 2009 19:03:33 +0000</pubDate>
		<guid isPermaLink="false">http://localhost/specblog/?p=1883#comment-4537</guid>
		<description><![CDATA[&quot;I&#039;m having a hard time with Ben&#039;s argument. I don&#039;t see how increasing intelligence can possibly reduce or eliminate individuality. If the UltimateBrain or SuperiorBrain really are &quot;ultimate&quot; or &quot;superior&quot; versions of what we have, then they would have to be much more complex than the brains we have. They might all start out the same, but wouldn&#039;t each instantiation of these programs quickly diversify based on the experiences and preferences of the individual intelligences which &quot;runs its personality&quot; in that environment? And wouldn&#039;t that environment be not just smarter than we are, but massively more complex? And isn&#039;t complexity one of the key contributors to, if not the defining factor behind, individuality?&quot;

I agree wholly.  And I&#039;d add this argument: Our sense of self is utterly contingent on continuity...memory.  Say we achieve the powerful Singularity intelligences described in the interview, either all at once, or, as I anticipate, gradually, with better and better brain augments.  What happens when we can remember, I mean REALLY remember in detail our lives before augments, during the gradual buildup of augmentation, and then with remarkably sharp, full memories as we enter the Singularity with our vast, upgraded intelligences?

I&#039;m as certain as anything that we would experience a much sharper sense of self than we do now.]]></description>
		<content:encoded><![CDATA[<p>&#8220;I&#8217;m having a hard time with Ben&#8217;s argument. I don&#8217;t see how increasing intelligence can possibly reduce or eliminate individuality. If the UltimateBrain or SuperiorBrain really are &#8220;ultimate&#8221; or &#8220;superior&#8221; versions of what we have, then they would have to be much more complex than the brains we have. They might all start out the same, but wouldn&#8217;t each instantiation of these programs quickly diversify based on the experiences and preferences of the individual intelligences which &#8220;runs its personality&#8221; in that environment? And wouldn&#8217;t that environment be not just smarter than we are, but massively more complex? And isn&#8217;t complexity one of the key contributors to, if not the defining factor behind, individuality?&#8221;</p>
<p>I agree wholly.  And I&#8217;d add this argument: Our sense of self is utterly contingent on continuity&#8230;memory.  Say we achieve the powerful Singularity intelligences described in the interview, either all at once, or, as I anticipate, gradually, with better and better brain augments.  What happens when we can remember, I mean REALLY remember in detail our lives before augments, during the gradual buildup of augmentation, and then with remarkably sharp, full memories as we enter the Singularity with our vast, upgraded intelligences?</p>
<p>I&#8217;m as certain as anything that we would experience a much sharper sense of self than we do now.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Ben Goertzel</title>
		<link>https://blog.speculist.com/artificial_intelligence/as-human-as-we.html#comment-4536</link>
		<dc:creator>Ben Goertzel</dc:creator>
		<pubDate>Sat, 09 May 2009 16:05:15 +0000</pubDate>
		<guid isPermaLink="false">http://localhost/specblog/?p=1883#comment-4536</guid>
		<description><![CDATA[My point wasn&#039;t that &quot;Ben Goertzel + 10^5 IQ points&quot; wouldn&#039;t be an individual ...
 
It might or might not be an individual in the sense that we currently use that term.  The &quot;phenomenal self&quot; that each of us humans have (to use Metzinger&#039;s term) might not be something that a superintelligent being would find a use for.  But that wasn&#039;t my point.

My point was that it might not recognizably have any of the self-structure of &quot;Ben Goertzel&quot; ... even if it were an individual of some kind.

Maybe the human personality and mind architecture are simply not compatible with drastically transhuman levels of intelligence.

The analogies some commenters make to &quot;30 year old humans versus 3 year old humans&quot; are not really on point, because both of these have human brain architecture and have intelligence within an order of magnitude of ordinary humans.

-- Ben G]]></description>
		<content:encoded><![CDATA[<p>My point wasn&#8217;t that &#8220;Ben Goertzel + 10^5 IQ points&#8221; wouldn&#8217;t be an individual &#8230;</p>
<p>It might or might not be an individual in the sense that we currently use that term.  The &#8220;phenomenal self&#8221; that each of us humans have (to use Metzinger&#8217;s term) might not be something that a superintelligent being would find a use for.  But that wasn&#8217;t my point.</p>
<p>My point was that it might not recognizably have any of the self-structure of &#8220;Ben Goertzel&#8221; &#8230; even if it were an individual of some kind.</p>
<p>Maybe the human personality and mind architecture are simply not compatible with drastically transhuman levels of intelligence.</p>
<p>The analogies some commenters make to &#8220;30 year old humans versus 3 year old humans&#8221; are not really on point, because both of these have human brain architecture and have intelligence within an order of magnitude of ordinary humans.</p>
<p>&#8211; Ben G</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Jonathan</title>
		<link>https://blog.speculist.com/artificial_intelligence/as-human-as-we.html#comment-4535</link>
		<dc:creator>Jonathan</dc:creator>
		<pubDate>Fri, 08 May 2009 15:36:14 +0000</pubDate>
		<guid isPermaLink="false">http://localhost/specblog/?p=1883#comment-4535</guid>
		<description><![CDATA[I think the point Ray makes in his book is that the changes to our consciousness will happen gradually over a period of a few decades.  Since this process will be gradual, there will be continuity in the new patterns of your consciousness.  If I were to jump from my 3 year old self to my current 30 year old self, I would say that my 3 year old self would think that this is not the same person (if he was even able to rationalize at this level).  The same can be said about a super-intelligent me 30 years from now.  That assumes that change is gradual; even with the advent of a benevolent AGI in 5 years, people may still decide to gradually &quot;upgrade&quot; their minds to maintain their sense of self.]]></description>
		<content:encoded><![CDATA[<p>I think the point Ray makes in his book is that the changes to our consciousness will happen gradually over a period of a few decades.  Since this process will be gradual, there will be continuity in the new patterns of your consciousness.  If I were to jump from my 3 year old self to my current 30 year old self, I would say that my 3 year old self would think that this is not the same person (if he was even able to rationalize at this level).  The same can be said about a super-intelligent me 30 years from now.  That assumes that change is gradual; even with the advent of a benevolent AGI in 5 years, people may still decide to gradually &#8220;upgrade&#8221; their minds to maintain their sense of self.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: David A. Young</title>
		<link>https://blog.speculist.com/artificial_intelligence/as-human-as-we.html#comment-4534</link>
		<dc:creator>David A. Young</dc:creator>
		<pubDate>Thu, 07 May 2009 11:06:50 +0000</pubDate>
		<guid isPermaLink="false">http://localhost/specblog/?p=1883#comment-4534</guid>
		<description><![CDATA[I have always thought of the &#039;trans&#039; in Transhuman more in the sense of &#039;transitional&#039; or &#039;transformative&#039; than &#039;transcendent.&#039;  If you look at the line of critters who preceded homo sapiens - from Lucy on up - each of them was the &#039;Human&#039; of its day.  The state-of-the-art for the time.  As each &#039;model&#039; transformed or transitioned into the next, acquiring ever more complexity and ability, they redefined what it meant to &#039;be Human.&#039;  By this perspective, whatever we evolve into - those creatures will &#039;be Human.&#039;  Their existence will define the species for their time.  Will we &#039;approve&#039; of those future Humans?  Who knows?  Do ya think homo habilis or homo erectus would have &#039;approved&#039; of us?  Or just been scared silly?]]></description>
		<content:encoded><![CDATA[<p>I have always thought of the &#8216;trans&#8217; in Transhuman more in the sense of &#8216;transitional&#8217; or &#8216;transformative&#8217; than &#8216;transcendent.&#8217;  If you look at the line of critters who preceded homo sapiens &#8211; from Lucy on up &#8211; each of them was the &#8216;Human&#8217; of its day.  The state-of-the-art for the time.  As each &#8216;model&#8217; transformed or transitioned into the next, acquiring ever more complexity and ability, they redefined what it meant to &#8216;be Human.&#8217;  By this perspective, whatever we evolve into &#8211; those creatures will &#8216;be Human.&#8217;  Their existence will define the species for their time.  Will we &#8216;approve&#8217; of those future Humans?  Who knows?  Do ya think homo habilis or homo erectus would have &#8216;approved&#8217; of us?  Or just been scared silly?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Gramarye</title>
		<link>https://blog.speculist.com/artificial_intelligence/as-human-as-we.html#comment-4533</link>
		<dc:creator>Gramarye</dc:creator>
		<pubDate>Wed, 06 May 2009 20:23:22 +0000</pubDate>
		<guid isPermaLink="false">http://localhost/specblog/?p=1883#comment-4533</guid>
		<description><![CDATA[Phil,

Well said.

I think of it in this fashion: There are probably several million characteristics that accurately describe me.  There are probably only a tiny handful of those that I consider essential to my identity--the core of who I am.  Changing any of the others no more changes who I am than does getting laser eye surgery or having wisdom teeth removed--or simply spending a year in college.  (In fact, the latter is more likely to result in some material change to who I am.)

To the extent that superintelligence might change who we are, I would suggest that it would most likely turbocharge the process by which life experiences change who we are.  Going back to my college example above: spending a year, or several, in college is certainly likely to change who you are.  If future technology allows one to effectively experience a similar amount of &quot;life&quot; (virtual or otherwise) in less time, then it&#039;s possible that we&#039;ll morph from any given present self to that self&#039;s future self more quickly than we otherwise would--but technology would at best facilitate those ways that we allow the world to shape who we are day in and day out now.  The technology itself wouldn&#039;t be changing who we are.]]></description>
		<content:encoded><![CDATA[<p>Phil,</p>
<p>Well said.</p>
<p>I think of it in this fashion: There are probably several million characteristics that accurately describe me.  There are probably only a tiny handful of those that I consider essential to my identity&#8211;the core of who I am.  Changing any of the others no more changes who I am than does getting laser eye surgery or having wisdom teeth removed&#8211;or simply spending a year in college.  (In fact, the latter is more likely to result in some material change to who I am.)</p>
<p>To the extent that superintelligence might change who we are, I would suggest that it would most likely turbocharge the process by which life experiences change who we are.  Going back to my college example above: spending a year, or several, in college is certainly likely to change who you are.  If future technology allows one to effectively experience a similar amount of &#8220;life&#8221; (virtual or otherwise) in less time, then it&#8217;s possible that we&#8217;ll morph from any given present self to that self&#8217;s future self more quickly than we otherwise would&#8211;but technology would at best facilitate those ways that we allow the world to shape who we are day in and day out now.  The technology itself wouldn&#8217;t be changing who we are.</p>
]]></content:encoded>
	</item>
</channel>
</rss>
