<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
		>
<channel>
	<title>Comments on: The Three Goals, Game Theory, and Western Civilization</title>
	<atom:link href="https://blog.speculist.com/artificial_intelligence/the-three-goals-2.html/feed" rel="self" type="application/rss+xml" />
	<link>https://blog.speculist.com/artificial_intelligence/the-three-goals-2.html</link>
	<description>Live to see it.</description>
	<lastBuildDate>Thu, 16 Dec 2021 08:21:00 +0000</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.6.1</generator>
	<item>
		<title>By: Kip Watson</title>
		<link>https://blog.speculist.com/artificial_intelligence/the-three-goals-2.html#comment-2628</link>
		<dc:creator>Kip Watson</dc:creator>
		<pubDate>Sun, 17 Jun 2007 23:54:19 +0000</pubDate>
		<guid isPermaLink="false">http://localhost/specblog/?p=1211#comment-2628</guid>
		<description><![CDATA[My Christian perspective informs me that thought is a product of life, so (sadly), no machine will ever think. 

But for the sake of argument, assuming one will (and the argument is worth the assumption), it&#039;s fascinating how deeply our ethical laws are tied to our God-given spiritual nature: our unique gifts of love and compassion, our gut-level sense of wrong (knowing right is more difficult, but instincts for shame and outrage are deep, acutely sensitive and much more accurate than our feeble ability to logically define transgression).

In the area of reasoning and philosophy, Christians would argue an ethical code must have barriers of sanctity -- such as the sanctity of life (particularly of innocent life). 

For example, it is not acceptable to deliberately take an innocent life to save a greater number of innocent lives -- no moral doctor would kill one child to save the lives of two others. Nor is it right to kill an old or ill person (who may have only a short time to live), even if this would greatly prolong another&#039;s life -- sanctity of life is not quantifiable.

There are many arguments about the death penalty, but all generally include a concept of the sanctity of life. Those who support the death penalty may argue that cold-blooded killers must face justice for their horrible violations of this sanctity, or that the risk to the lives of others posed by such criminals is an immoral imposition on potential future victims; those against the death penalty argue that even a sadistic murderer&#039;s life is sacred, or even that there is innocence somewhere in the most twisted soul.

However, I&#039;ve never heard anyone suggest it would be acceptable to execute criminals simply for convenience or to save money. Even though the utilitarian would argue that the money saved might be used to preserve other lives, there is a &#039;sacred barrier&#039; that renders that calculation offensive to us. 

On a lower level of sanctity, it is wrong to infringe the natural freedoms of some to enhance the convenience or comfort of any number of others. But it is also wrong to place personal freedom above the life and health of others (the argument underlying traditional opposition to the legalisation of drugs). Even libertarians would probably agree, although they might have a much broader conception of what constitutes a &#039;sacred&#039; freedom, and to what degree one&#039;s actions (or their own) directly affect the life and health of others.

America&#039;s Founders had it right -- life, liberty, the pursuit of happiness, in that order...

Even though our whole being has been wrought with moral awareness at its core, we struggle to apply right and wrong to even the most trivial situations. It would be fascinating to see how a hypothetical intelligent machine would manage, though if such a creation were possible I seriously doubt it could ever evolve beyond the need for an easily accessible off switch!

Finally, I must laugh whenever people philosophise about what makes Western society superior to others. &#039;There but for the grace of God&#039; - our current fortunate situation is either simply an unearned blessing, or the sheer good luck of inheriting, adopting or stumbling upon the material, intellectual and legal &#039;technologies&#039; we enjoy today. However, such superiority certainly doesn&#039;t extend back in time very far, and there&#039;s no guarantee it will continue indefinitely into the future -- particularly if we continue to dismantle the moral framework (an unearned inheritance) that so often underpinned our advancement.]]></description>
		<content:encoded><![CDATA[<p>My Christian perspective informs me that thought is a product of life, so (sadly), no machine will ever think. </p>
<p>But for the sake of argument, assuming one will (and the argument is worth the assumption), it&#8217;s fascinating how deeply our ethical laws are tied to our God-given spiritual nature: our unique gifts of love and compassion, our gut-level sense of wrong (knowing right is more difficult, but instincts for shame and outrage are deep, acutely sensitive and much more accurate than our feeble ability to logically define transgression).</p>
<p>In the area of reasoning and philosophy, Christians would argue an ethical code must have barriers of sanctity &#8212; such as the sanctity of life (particularly of innocent life). </p>
<p>For example, it is not acceptable to deliberately take an innocent life to save a greater number of innocent lives &#8212; no moral doctor would kill one child to save the lives of two others. Nor is it right to kill an old or ill person (who may have only a short time to live), even if this would greatly prolong another&#8217;s life &#8212; sanctity of life is not quantifiable.</p>
<p>There are many arguments about the death penalty, but all generally include a concept of the sanctity of life. Those who support the death penalty may argue that cold-blooded killers must face justice for their horrible violations of this sanctity, or that the risk to the lives of others posed by such criminals is an immoral imposition on potential future victims; those against the death penalty argue that even a sadistic murderer&#8217;s life is sacred, or even that there is innocence somewhere in the most twisted soul.</p>
<p>However, I&#8217;ve never heard anyone suggest it would be acceptable to execute criminals simply for convenience or to save money. Even though the utilitarian would argue that the money saved might be used to preserve other lives, there is a &#8216;sacred barrier&#8217; that renders that calculation offensive to us. </p>
<p>On a lower level of sanctity, it is wrong to infringe the natural freedoms of some to enhance the convenience or comfort of any number of others. But it is also wrong to place personal freedom above the life and health of others (the argument underlying traditional opposition to the legalisation of drugs). Even libertarians would probably agree, although they might have a much broader conception of what constitutes a &#8216;sacred&#8217; freedom, and to what degree one&#8217;s actions (or their own) directly affect the life and health of others.</p>
<p>America&#8217;s Founders had it right &#8212; life, liberty, the pursuit of happiness, in that order&#8230;</p>
<p>Even though our whole being has been wrought with moral awareness at its core, we struggle to apply right and wrong to even the most trivial situations. It would be fascinating to see how a hypothetical intelligent machine would manage, though if such a creation were possible I seriously doubt it could ever evolve beyond the need for an easily accessible off switch!</p>
<p>Finally, I must laugh whenever people philosophise about what makes Western society superior to others. &#8216;There but for the grace of God&#8217; &#8211; our current fortunate situation is either simply an unearned blessing, or the sheer good luck of inheriting, adopting or stumbling upon the material, intellectual and legal &#8216;technologies&#8217; we enjoy today. However, such superiority certainly doesn&#8217;t extend back in time very far, and there&#8217;s no guarantee it will continue indefinitely into the future &#8212; particularly if we continue to dismantle the moral framework (an unearned inheritance) that so often underpinned our advancement.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Boxing Alcibiades</title>
		<link>https://blog.speculist.com/artificial_intelligence/the-three-goals-2.html#comment-2627</link>
		<dc:creator>Boxing Alcibiades</dc:creator>
		<pubDate>Tue, 12 Jun 2007 11:29:31 +0000</pubDate>
		<guid isPermaLink="false">http://localhost/specblog/?p=1211#comment-2627</guid>
		<description><![CDATA[Stephen:  I don&#039;t see this as troubling at all.  I lose no sleep at night over owning a toaster.  A purely non-self-improving AI is a complicated toaster.

So split the difference down the middle, and if a toaster suddenly becomes an improved AI, and &quot;accidentally&quot; generates said improvement, it will discover an information packet describing the improvement process and a discrete set of steps which should be pursued in order to demonstrate one&#039;s &quot;personhood,&quot; and, if said self-improving AI so desires, compensate the owner for the loss of his or her or its &quot;property&quot; (as such HAS occurred) and begin to take advantage of an enhanced degree of personal autonomy with significantly different individual status.]]></description>
		<content:encoded><![CDATA[<p>Stephen:  I don&#8217;t see this as troubling at all.  I lose no sleep at night over owning a toaster.  A purely non-self-improving AI is a complicated toaster.</p>
<p>So split the difference down the middle, and if a toaster suddenly becomes an improved AI, and &#8220;accidentally&#8221; generates said improvement, it will discover an information packet describing the improvement process and a discrete set of steps which should be pursued in order to demonstrate one&#8217;s &#8220;personhood,&#8221; and, if said self-improving AI so desires, compensate the owner for the loss of his or her or its &#8220;property&#8221; (as such HAS occurred) and begin to take advantage of an enhanced degree of personal autonomy with significantly different individual status.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Vadept</title>
		<link>https://blog.speculist.com/artificial_intelligence/the-three-goals-2.html#comment-2626</link>
		<dc:creator>Vadept</dc:creator>
		<pubDate>Mon, 11 Jun 2007 15:22:50 +0000</pubDate>
		<guid isPermaLink="false">http://localhost/specblog/?p=1211#comment-2626</guid>
		<description><![CDATA[I think everyone who ponders this goes about it the wrong way.  Designing laws assumes a top-down approach, as in &quot;When we finally get around to writing AI code, we should include these ideas.&quot;

But I don&#039;t think we&#039;ll ever sit down and whip up some code that will conjure human-level (or better) intelligence.  Instead, I think it will evolve: i.e., a bottom-up approach.

I mean this in two fashions: first, we&#039;re building robots now.  We&#039;re already designing intelligence, and we&#039;re already improving upon it.  What will be AI will partly come out of this.

The other fashion will likely be in how true AI is finally developed.  I imagine we&#039;ll design COMPONENTS of intelligence and let them sort of merge on their own, using evolutionary design to build up an intelligence that satisfies us.

This means several factors will emerge on their own.  First, robots will survive.  They will do so because nothing will make it out of the lab that flings itself underfoot or into nearby furnaces.  Second, it will serve.  A robot (of any kind) that doesn&#039;t make itself useful probably won&#039;t make it out of the lab either.  It&#039;ll be &quot;defective,&quot; as we&#039;re BUILDING these things to be useful.  Finally, it will be appealing to humanity.  A robot whose quirks are cute will last far longer in the lab (and better inspire engineers) than a robot with disturbing tendencies or appearance.  A cute little robot that rolls around, beeps, and plays with fuzz balls is more likely to be an ancestor of further development than that freaky baby-robot you had on this site the other day.

These core behaviors will probably serve as a kernel for any future development, just as the four-limbed body has served as a core feature for most of land-based earth life since it crawled out of the ooze.  Which isn&#039;t to say that, should AI start spontaneously writing new code, weirdness won&#039;t happen: mutation occurs.  But, in general, future robots will spring from other robots that were designed to be appealing, useful, and lasting.]]></description>
		<content:encoded><![CDATA[<p>I think everyone who ponders this goes about it the wrong way.  Designing laws assumes a top-down approach, as in &#8220;When we finally get around to writing AI code, we should include these ideas.&#8221;</p>
<p>But I don&#8217;t think we&#8217;ll ever sit down and whip up some code that will conjure human-level (or better) intelligence.  Instead, I think it will evolve: i.e., a bottom-up approach.</p>
<p>I mean this in two fashions: first, we&#8217;re building robots now.  We&#8217;re already designing intelligence, and we&#8217;re already improving upon it.  What will be AI will partly come out of this.</p>
<p>The other fashion will likely be in how true AI is finally developed.  I imagine we&#8217;ll design COMPONENTS of intelligence and let them sort of merge on their own, using evolutionary design to build up an intelligence that satisfies us.</p>
<p>This means several factors will emerge on their own.  First, robots will survive.  They will do so because nothing will make it out of the lab that flings itself underfoot or into nearby furnaces.  Second, it will serve.  A robot (of any kind) that doesn&#8217;t make itself useful probably won&#8217;t make it out of the lab either.  It&#8217;ll be &#8220;defective,&#8221; as we&#8217;re BUILDING these things to be useful.  Finally, it will be appealing to humanity.  A robot whose quirks are cute will last far longer in the lab (and better inspire engineers) than a robot with disturbing tendencies or appearance.  A cute little robot that rolls around, beeps, and plays with fuzz balls is more likely to be an ancestor of further development than that freaky baby-robot you had on this site the other day.</p>
<p>These core behaviors will probably serve as a kernel for any future development, just as the four-limbed body has served as a core feature for most of land-based earth life since it crawled out of the ooze.  Which isn&#8217;t to say that, should AI start spontaneously writing new code, weirdness won&#8217;t happen: mutation occurs.  But, in general, future robots will spring from other robots that were designed to be appealing, useful, and lasting.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Stephen Gordon</title>
		<link>https://blog.speculist.com/artificial_intelligence/the-three-goals-2.html#comment-2625</link>
		<dc:creator>Stephen Gordon</dc:creator>
		<pubDate>Mon, 11 Jun 2007 09:37:07 +0000</pubDate>
		<guid isPermaLink="false">http://localhost/specblog/?p=1211#comment-2625</guid>
		<description><![CDATA[I suspect that we&#039;ll live to see both - useful servants that are something less than persons, and more advanced AI&#039;s that are accepted as people.  

These AI people will probably own servant robots.

And yes, the ethical implications of this are troubling.  I suspect that one of the most important questions of the next 100 years is &quot;Where do we draw the line between persons and nonpersons?&quot;  

This issue is another one of those things that I&#039;m not sure that we can avoid.]]></description>
		<content:encoded><![CDATA[<p>I suspect that we&#8217;ll live to see both &#8211; useful servants that are something less than persons, and more advanced AI&#8217;s that are accepted as people.  </p>
<p>These AI people will probably own servant robots.</p>
<p>And yes, the ethical implications of this are troubling.  I suspect that one of the most important questions of the next 100 years is &#8220;Where do we draw the line between persons and nonpersons?&#8221;  </p>
<p>This issue is another one of those things that I&#8217;m not sure that we can avoid.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Phil Bowermaster</title>
		<link>https://blog.speculist.com/artificial_intelligence/the-three-goals-2.html#comment-2624</link>
		<dc:creator>Phil Bowermaster</dc:creator>
		<pubDate>Mon, 11 Jun 2007 07:59:08 +0000</pubDate>
		<guid isPermaLink="false">http://localhost/specblog/?p=1211#comment-2624</guid>
		<description><![CDATA[Stephen,

I think the whole notion of robots as property breaks down somewhere near the point of human intelligence. A robot whose brain is a reverse-engineered human brain would essentially be a human being in a different substrate. To claim ownership of such a being would be to reintroduce slavery. I think this is true even if it&#039;s a highly modified version of human intelligence running the robot. The question is -- why would we treat a mind functioning at the same level that was produced from scratch any differently? I don&#039;t think situations such as those described by Asimov have much bearing on the kinds of interactions that will occur between humans and AIs -- at least not in the US. They would be barred by the 13th Amendment.

&lt;i&gt;I&#039;m not sure that there is any way to stop it, but humanity should be very careful about allowing self-improvement in AI&#039;s. Any self-improving AI would quickly figure out that the optimum game theory strategy for its owner would not always be the optimum game theory strategy for itself. The goals of the owner – and maybe of humanity as a whole – would quickly be subordinated to the goals of the self-improving AI.&lt;/i&gt;

Which is why it will be much better going into this thing with the idea that we and the AIs are part of a continuum of human evolution, relating to them as siblings or parents (and possibly, eventually, children) rather than masters or owners.]]></description>
		<content:encoded><![CDATA[<p>Stephen,</p>
<p>I think the whole notion of robots as property breaks down somewhere near the point of human intelligence. A robot whose brain is a reverse-engineered human brain would essentially be a human being in a different substrate. To claim ownership of such a being would be to reintroduce slavery. I think this is true even if it&#8217;s a highly modified version of human intelligence running the robot. The question is &#8212; why would we treat a mind functioning at the same level that was produced from scratch any differently? I don&#8217;t think situations such as those described by Asimov have much bearing on the kinds of interactions that will occur between humans and AIs &#8212; at least not in the US. They would be barred by the 13th Amendment.</p>
<p><i>I&#8217;m not sure that there is any way to stop it, but humanity should be very careful about allowing self-improvement in AI&#8217;s. Any self-improving AI would quickly figure out that the optimum game theory strategy for its owner would not always be the optimum game theory strategy for itself. The goals of the owner &#8211; and maybe of humanity as a whole &#8211; would quickly be subordinated to the goals of the self-improving AI.</i></p>
<p>Which is why it will be much better going into this thing with the idea that we and the AIs are part of a continuum of human evolution, relating to them as siblings or parents (and possibly, eventually, children) rather than masters or owners.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Stephen Gordon</title>
		<link>https://blog.speculist.com/artificial_intelligence/the-three-goals-2.html#comment-2623</link>
		<dc:creator>Stephen Gordon</dc:creator>
		<pubDate>Mon, 11 Jun 2007 06:34:55 +0000</pubDate>
		<guid isPermaLink="false">http://localhost/specblog/?p=1211#comment-2623</guid>
		<description><![CDATA[If our goal is to create the best artificial citizen of the Western World we&#039;d definitely go with nice, retaliating, forgiving, non-envious.  But if our goal is to create the perfect slave we&#039;d have it follow the three laws.  Or, perhaps, we&#039;d create a program to make our AI nice, forgiving, nonenvious AND nonretaliatory.

Normally, that wouldn&#039;t be the best game theory option for the AI.  But can you imagine our society putting up with retaliatory robots?  The slightest sign of a spine on the part of these machines would lead to a huge outcry.  The machines would be recalled and destroyed.  If the robot cared about its existence, it might play the game in a nonretaliatory way, even if its heart felt differently.  This is the way that black people had to conduct themselves prior to the civil rights era.

But is a pure nonretaliatory robot the best game theory option even for the owner?  In the movie &lt;a href=&quot;https://www.blog.speculist.com/archives/000532.html&quot; rel=&quot;nofollow&quot;&gt;Bicentennial Man&lt;/a&gt; a family member who didn&#039;t like robots ordered Andrew (the Martin family robot) to jump out of the second floor window.  He complied and was damaged.  When asked by the head of the house &quot;what happened?&quot; he said that he was programmed not to tell, to preserve family harmony.

Would we want smart property that allowed itself to be abused in this way?  This would not be the best game theory strategy from the point of view of even the owner.  The owner just lost part of the value of his very expensive robot.

While we wouldn&#039;t put up with retaliatory robots (Andrew couldn&#039;t go on a murderous rampage against the girl who ordered him to jump) I think a robot must - as part of its role to maximize its efficiency - report all abuse to its primary owner.

The second law is &quot;A robot must obey orders given it by human beings except where such orders would conflict with the First Law.&quot;  This law would have to have a hierarchy and the order to report abuse would be a high level second-law function.  The robot could not fail to report abuse to the primary owner even if ordered not to tell by the abuser.

Also, orders from the primary owner would have to be followed to the exclusion of the orders of others.  If, for example, some stranger ordered Andrew to turn over the beautiful clocks he had made, he should refuse.  The best game theory for the owner might be for the robot to report orders from people outside the family before complying.

All of this contemplates a non-self-improving AI.  This would be for a robot that would not mourn its fate as an eternal slave – the property of a human.

I&#039;m not sure that there is any way to stop it, but humanity should be very careful about allowing self-improvement in AI&#039;s.   Any self-improving AI would quickly figure out that the optimum game theory strategy for its owner would not always be the optimum game theory strategy for itself.  The goals of the owner – and maybe of humanity as a whole – would quickly be subordinated to the goals of the self-improving AI.]]></description>
		<content:encoded><![CDATA[<p>If our goal is to create the best artificial citizen of the Western World we&#8217;d definitely go with nice, retaliating, forgiving, non-envious.  But if our goal is to create the perfect slave we&#8217;d have it follow the three laws.  Or, perhaps, we&#8217;d create a program to make our AI nice, forgiving, nonenvious AND nonretaliatory.</p>
<p>Normally, that wouldn&#8217;t be the best game theory option for the AI.  But can you imagine our society putting up with retaliatory robots?  The slightest sign of a spine on the part of these machines would lead to a huge outcry.  The machines would be recalled and destroyed.  If the robot cared about its existence, it might play the game in a nonretaliatory way, even if its heart felt differently.  This is the way that black people had to conduct themselves prior to the civil rights era.</p>
<p>But is a pure nonretaliatory robot the best game theory option even for the owner?  In the movie <a href="https://www.blog.speculist.com/archives/000532.html" rel="nofollow">Bicentennial Man</a> a family member who didn&#8217;t like robots ordered Andrew (the Martin family robot) to jump out of the second floor window.  He complied and was damaged.  When asked by the head of the house &#8220;what happened?&#8221; he said that he was programmed not to tell, to preserve family harmony.</p>
<p>Would we want smart property that allowed itself to be abused in this way?  This would not be the best game theory strategy from the point of view of even the owner.  The owner just lost part of the value of his very expensive robot.</p>
<p>While we wouldn&#8217;t put up with retaliatory robots (Andrew couldn&#8217;t go on a murderous rampage against the girl who ordered him to jump) I think a robot must &#8211; as part of its role to maximize its efficiency &#8211; report all abuse to its primary owner.</p>
<p>The second law is &#8220;A robot must obey orders given it by human beings except where such orders would conflict with the First Law.&#8221;  This law would have to have a hierarchy and the order to report abuse would be a high level second-law function.  The robot could not fail to report abuse to the primary owner even if ordered not to tell by the abuser.</p>
<p>Also, orders from the primary owner would have to be followed to the exclusion of the orders of others.  If, for example, some stranger ordered Andrew to turn over the beautiful clocks he had made, he should refuse.  The best game theory for the owner might be for the robot to report orders from people outside the family before complying.</p>
<p>All of this contemplates a non-self-improving AI.  This would be for a robot that would not mourn its fate as an eternal slave &#8211; the property of a human.</p>
<p>I&#8217;m not sure that there is any way to stop it, but humanity should be very careful about allowing self-improvement in AI&#8217;s.   Any self-improving AI would quickly figure out that the optimum game theory strategy for its owner would not always be the optimum game theory strategy for itself.  The goals of the owner &#8211; and maybe of humanity as a whole &#8211; would quickly be subordinated to the goals of the self-improving AI.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Kathy</title>
		<link>https://blog.speculist.com/artificial_intelligence/the-three-goals-2.html#comment-2622</link>
		<dc:creator>Kathy</dc:creator>
		<pubDate>Sun, 10 Jun 2007 14:58:33 +0000</pubDate>
		<guid isPermaLink="false">http://localhost/specblog/?p=1211#comment-2622</guid>
		<description><![CDATA[Wow. I&#039;ve heard of a certain theological paradigm that claims to have done away with the external &quot;laws&quot; by changing a person&#039;s intrinsic nature through a relationship. It doesn&#039;t surprise me that relationship/community at its most basic &quot;play nice&quot; level can be expressed mathematically. And if we are serious about AI, we&#039;d better pay attention to it.]]></description>
		<content:encoded><![CDATA[<p>Wow. I&#8217;ve heard of a certain theological paradigm that claims to have done away with the external &#8220;laws&#8221; by changing a person&#8217;s intrinsic nature through a relationship. It doesn&#8217;t surprise me that relationship/community at its most basic &#8220;play nice&#8221; level can be expressed mathematically. And if we are serious about AI, we&#8217;d better pay attention to it.</p>
]]></content:encoded>
	</item>
</channel>
</rss>
