<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
		>
<channel>
	<title>Comments on: The Three Goals of Robotics</title>
	<atom:link href="https://blog.speculist.com/artificial_intelligence/the-three-goals.html/feed" rel="self" type="application/rss+xml" />
	<link>https://blog.speculist.com/artificial_intelligence/the-three-goals.html</link>
	<description>Live to see it.</description>
	<lastBuildDate>Thu, 16 Dec 2021 08:21:00 +0000</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.6.1</generator>
	<item>
		<title>By: Karl Hallowell</title>
		<link>https://blog.speculist.com/artificial_intelligence/the-three-goals.html#comment-2551</link>
		<dc:creator>Karl Hallowell</dc:creator>
		<pubDate>Thu, 24 May 2007 05:19:30 +0000</pubDate>
		<guid isPermaLink="false">http://localhost/specblog/?p=1185#comment-2551</guid>
		<description><![CDATA[&lt;i&gt;how rule-based systems and table lookups were subsumed by genetic algorithms and relational databases in software.&lt;/i&gt;

matoko, relational databases are tabled databases by definition, hence, one does table lookups when performing a variety of relational database operations (or at least it appears that way to the user). I gather object oriented or functional databases (the latter is the current version of rules-based systems) are state of the art these days, but I haven&#039;t been keeping up.

Second, genetic algorithms (GAs) solve a sort of problem mostly orthogonal to anything else you mention. I.e., optimization problems. At its most related, I imagine one could use GAs to optimize queries and other high level operations on the database. Search optimization is a serious problem and poor searching methods can slow down database queries by a considerable amount.]]></description>
		<content:encoded><![CDATA[<p><i>how rule-based systems and table lookups were subsumed by genetic algorithms and relational databases in software.</i></p>
<p>matoko, relational databases are tabled databases by definition, hence, one does table lookups when performing a variety of relational database operations (or at least it appears that way to the user). I gather object oriented or functional databases (the latter is the current version of rules-based systems) are state of the art these days, but I haven&#8217;t been keeping up.</p>
<p>Second, genetic algorithms (GAs) solve a sort of problem mostly orthogonal to anything else you mention. I.e., optimization problems. At its most related, I imagine one could use GAs to optimize queries and other high level operations on the database. Search optimization is a serious problem and poor searching methods can slow down database queries by a considerable amount.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Larry</title>
		<link>https://blog.speculist.com/artificial_intelligence/the-three-goals.html#comment-2550</link>
		<dc:creator>Larry</dc:creator>
		<pubDate>Wed, 23 May 2007 06:58:14 +0000</pubDate>
		<guid isPermaLink="false">http://localhost/specblog/?p=1185#comment-2550</guid>
		<description><![CDATA[Asimov&#039;s laws focus on individual encounters. Your goals are more like a constitution.

Since sentience is a matter of degree, at what point do the desires/needs of robots become of equal value to those of humans? I.e., at what point does society emancipate them? And after that point, does it become criminal to produce a less-than-maximally sentient robot in the same way that it would be to induce brain defects in a fetus?]]></description>
		<content:encoded><![CDATA[<p>Asimov&#8217;s laws focus on individual encounters. Your goals are more like a constitution.</p>
<p>Since sentience is a matter of degree, at what point do the desires/needs of robots become of equal value to those of humans? I.e., at what point does society emancipate them? And after that point, does it become criminal to produce a less-than-maximally sentient robot in the same way that it would be to induce brain defects in a fetus?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Phil Bowermaster</title>
		<link>https://blog.speculist.com/artificial_intelligence/the-three-goals.html#comment-2549</link>
		<dc:creator>Phil Bowermaster</dc:creator>
		<pubDate>Mon, 21 May 2007 21:01:22 +0000</pubDate>
		<guid isPermaLink="false">http://localhost/specblog/?p=1185#comment-2549</guid>
		<description><![CDATA[Michael A. -

&lt;i&gt;What would work better would be transferring over the moral complexity that you used to make up these goals in the first place.&lt;/i&gt;

Actually, that&#039;s kind of where I&#039;m going with this. The more I look at these goals, the more I think they&#039;re really goals for &lt;i&gt;us.&lt;/i&gt; If we play our cards right (and something like CEV might get us there) these might eventually be goals we share with these new intelligences. It makes me wonder...have we (humanity or individuals) had some version of CEV that we&#039;ve been running all along?]]></description>
		<content:encoded><![CDATA[<p>Michael A. -</p>
<p><i>What would work better would be transferring over the moral complexity that you used to make up these goals in the first place.</i></p>
<p>Actually, that&#8217;s kind of where I&#8217;m going with this. The more I look at these goals, the more I think they&#8217;re really goals for <i>us.</i> If we play our cards right (and something like CEV might get us there) these might eventually be goals we share with these new intelligences. It makes me wonder&#8230;have we (humanity or individuals) had some version of CEV that we&#8217;ve been running all along?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Phil Bowermaster</title>
		<link>https://blog.speculist.com/artificial_intelligence/the-three-goals.html#comment-2548</link>
		<dc:creator>Phil Bowermaster</dc:creator>
		<pubDate>Mon, 21 May 2007 19:28:07 +0000</pubDate>
		<guid isPermaLink="false">http://localhost/specblog/?p=1185#comment-2548</guid>
		<description><![CDATA[Well, class, I just don&#039;t know what I&#039;m going to do with you. I&#039;m afraid I&#039;m going to have to keep you all after school so we can spend some time &lt;i&gt;trying&lt;/i&gt; to understand the difference between rules and goals...not the same thing, I&#039;m afraid, and I was about as adamant as I could possibly be as to which one I was writing about. Matoko, you have to stay later for your disrespectful attitude towards other students. But, hey, it could be worse -- at least I&#039;m not counting off for spelling and punctuation!

BTW, not everyone agrees that recursive self-improvement alone will get us there. In the piece by Eliezer Yudkowsky that Michael Anissimov linked above, Eliezer argues that recursive self-improvement on its own might prove very dangerous.]]></description>
		<content:encoded><![CDATA[<p>Well, class, I just don&#8217;t know what I&#8217;m going to do with you. I&#8217;m afraid I&#8217;m going to have to keep you all after school so we can spend some time <i>trying</i> to understand the difference between rules and goals&#8230;not the same thing, I&#8217;m afraid, and I was about as adamant as I could possibly be as to which one I was writing about. Matoko, you have to stay later for your disrespectful attitude towards other students. But, hey, it could be worse &#8212; at least I&#8217;m not counting off for spelling and punctuation!</p>
<p>BTW, not everyone agrees that recursive self-improvement alone will get us there. In the piece by Eliezer Yudkowsky that Michael Anissimov linked above, Eliezer argues that recursive self-improvement on its own might prove very dangerous.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: matoko kusanagi</title>
		<link>https://blog.speculist.com/artificial_intelligence/the-three-goals.html#comment-2547</link>
		<dc:creator>matoko kusanagi</dc:creator>
		<pubDate>Mon, 21 May 2007 18:38:11 +0000</pubDate>
		<guid isPermaLink="false">http://localhost/specblog/?p=1185#comment-2547</guid>
		<description><![CDATA[phil read the literature please before commenting on Friendly AI.
your rule-based system is already obsolete.]]></description>
		<content:encoded><![CDATA[<p>phil read the literature please before commenting on Friendly AI.<br />
your rule-based system is already obsolete.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: matoko kusanagi</title>
		<link>https://blog.speculist.com/artificial_intelligence/the-three-goals.html#comment-2546</link>
		<dc:creator>matoko kusanagi</dc:creator>
		<pubDate>Mon, 21 May 2007 18:36:16 +0000</pubDate>
		<guid isPermaLink="false">http://localhost/specblog/?p=1185#comment-2546</guid>
		<description><![CDATA[dumb, michael
phil&#039;s rule-based system is woefully obsolete already.
read my linkage.
it is like......how rule-based systems and table lookups were subsumed by genetic algorithms and relational databases in software.
welcome to the 21st century, dudes.]]></description>
		<content:encoded><![CDATA[<p>dumb, michael<br />
phil&#8217;s rule-based system is woefully obsolete already.<br />
read my linkage.<br />
it is like&#8230;&#8230;how rule-based systems and table lookups were subsumed by genetic algorithms and relational databases in software.<br />
welcome to the 21st century, dudes.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Michael S. Sargent</title>
		<link>https://blog.speculist.com/artificial_intelligence/the-three-goals.html#comment-2545</link>
		<dc:creator>Michael S. Sargent</dc:creator>
		<pubDate>Mon, 21 May 2007 10:07:47 +0000</pubDate>
		<guid isPermaLink="false">http://localhost/specblog/?p=1185#comment-2545</guid>
		<description><![CDATA[Phil,

I&#039;m afraid I have to side with TJIC with regard to the priority structure embodied in the Goals as presented.

The sort of guidance we are envisioning must, at the most fundamental level, be logically exclusive and compelling or the AI in question could &#039;reason&#039; its way around the strictures imposed (much like Asimov&#039;s &quot;Zeroth Law&quot; robots) and, eventually, the value of the Goals in limiting AI evolution to the high-power, high-controllability track is lost.

Each Goal must have a clear and unbreakable priority over the others that follow it and thus, in the order stated, collective continuity trumps individual safety (&quot;The needs of the many outweigh the needs of the few, or the one.&quot;), individual safety (broadly construed, &#039;stasis&#039;) trumps individual liberty (&#039;free will&#039;), and happiness (&#039;utility&#039;, a notoriously slippery concept for economists and philosophers to get a firm intellectual grip on) trumps both individual liberty and individual well-being (allowing potentially self-destructive behavior on the individual level insofar as that behavior doesn&#039;t exceed the standard established for &#039;safety&#039; in Goal 2).

Whole series of books could be written whose plots hinge on the resolution of internal conflicts among the Goals and creative resolutions thereof, but, as currently constituted, they lead, logically, resolutely, and inescapably (if they have the &#039;teeth&#039; necessary to prevent a sufficiently creative or evolutionary AI from eventually transforming into the powerful, uncontrollable, and ultimately selfish and dominating type) to the ultimate expression of the &#039;&lt;a href=&quot;http://en.wikipedia.org/wiki/Precautionary_principle&quot; rel=&quot;nofollow&quot;&gt;precautionary principle&lt;/a&gt;&#039; and an eventual &#039;lotus eater&#039; static state for all sentients.]]></description>
		<content:encoded><![CDATA[<p>Phil,</p>
<p>I&#8217;m afraid I have to side with TJIC with regard to the priority structure embodied in the Goals as presented.</p>
<p>The sort of guidance we are envisioning must, at the most fundamental level, be logically exclusive and compelling or the AI in question could &#8216;reason&#8217; its way around the strictures imposed (much like Asimov&#8217;s &#8220;Zeroth Law&#8221; robots) and, eventually, the value of the Goals in limiting AI evolution to the high-power, high-controllability track is lost.</p>
<p>Each Goal must have a clear and unbreakable priority over the others that follow it and thus, in the order stated, collective continuity trumps individual safety (&#8220;The needs of the many outweigh the needs of the few, or the one.&#8221;), individual safety (broadly construed, &#8216;stasis&#8217;) trumps individual liberty (&#8216;free will&#8217;), and happiness (&#8216;utility&#8217;, a notoriously slippery concept for economists and philosophers to get a firm intellectual grip on) trumps both individual liberty and individual well-being (allowing potentially self-destructive behavior on the individual level insofar as that behavior doesn&#8217;t exceed the standard established for &#8216;safety&#8217; in Goal 2).</p>
<p>Whole series of books could be written whose plots hinge on the resolution of internal conflicts among the Goals and creative resolutions thereof, but, as currently constituted, they lead, logically, resolutely, and inescapably (if they have the &#8216;teeth&#8217; necessary to prevent a sufficiently creative or evolutionary AI from eventually transforming into the powerful, uncontrollable, and ultimately selfish and dominating type) to the ultimate expression of the &#8216;<a href="http://en.wikipedia.org/wiki/Precautionary_principle" rel="nofollow">precautionary principle</a>&#8217; and an eventual &#8216;lotus eater&#8217; static state for all sentients.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: matoko kusanagi</title>
		<link>https://blog.speculist.com/artificial_intelligence/the-three-goals.html#comment-2544</link>
		<dc:creator>matoko kusanagi</dc:creator>
		<pubDate>Mon, 21 May 2007 06:31:14 +0000</pubDate>
		<guid isPermaLink="false">http://localhost/specblog/?p=1185#comment-2544</guid>
		<description><![CDATA[phil...the concept u are reaching for is called recursive self-improvement.
mathematically it is semi-rigorous, and there has been quite a body of work done on it.  ;)
heres a link.
http://quantumghosts.blogspot.com/2006/09/friendly-ai-possible-friendly.html]]></description>
		<content:encoded><![CDATA[<p>phil&#8230;the concept u are reaching for is called recursive self-improvement.<br />
mathematically it is semi-rigorous, and there has been quite a body of work done on it.  <img src='https://blog.speculist.com/wp-includes/images/smilies/icon_wink.gif' alt=';)' class='wp-smiley' /><br />
heres a link.<br />
<a href="http://quantumghosts.blogspot.com/2006/09/friendly-ai-possible-friendly.html" rel="nofollow">http://quantumghosts.blogspot.com/2006/09/friendly-ai-possible-friendly.html</a></p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Vadept</title>
		<link>https://blog.speculist.com/artificial_intelligence/the-three-goals.html#comment-2543</link>
		<dc:creator>Vadept</dc:creator>
		<pubDate>Mon, 21 May 2007 04:31:19 +0000</pubDate>
		<guid isPermaLink="false">http://localhost/specblog/?p=1185#comment-2543</guid>
		<description><![CDATA[The &quot;Higher intelligence, dumber world&quot; comes from two things, I think.  The first, as one columnist pointed out, comes from the increasing sophistication of the world.  You need to know more to get by in the world than ever before, and each deficiency is a glaring flaw that makes you look &quot;dumb.&quot;

The second &quot;problem&quot; is the ever expanding scope of conversation.  We have more memes, more participating individuals than ever before.  And any forumite can tell you that the intelligence of his preferred forum is inversely related to the number of people posting there, i.e., the more people post, the more you&#039;re likely to run into a jackass.  Apply this to the news and internet writ large.

Of course, for every moron, you can meet a genius, and as a result, our world is advancing rapidly and things like Yahoo Answers or Wikipedia are helping more people than ever &quot;figure things out,&quot; but people tend to notice the stupidity more than the brilliance, so everything &quot;looks dumb.&quot;]]></description>
		<content:encoded><![CDATA[<p>The &#8220;Higher intelligence, dumber world&#8221; comes from two things, I think.  The first, as one columnist pointed out, comes from the increasing sophistication of the world.  You need to know more to get by in the world than ever before, and each deficiency is a glaring flaw that makes you look &#8220;dumb.&#8221;</p>
<p>The second &#8220;problem&#8221; is the ever expanding scope of conversation.  We have more memes, more participating individuals than ever before.  And any forumite can tell you that the intelligence of his preferred forum is inversely related to the number of people posting there, i.e., the more people post, the more you&#8217;re likely to run into a jackass.  Apply this to the news and internet writ large.</p>
<p>Of course, for every moron, you can meet a genius, and as a result, our world is advancing rapidly and things like Yahoo Answers or Wikipedia are helping more people than ever &#8220;figure things out,&#8221; but people tend to notice the stupidity more than the brilliance, so everything &#8220;looks dumb.&#8221;</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Michael Anissimov</title>
		<link>https://blog.speculist.com/artificial_intelligence/the-three-goals.html#comment-2542</link>
		<dc:creator>Michael Anissimov</dc:creator>
		<pubDate>Sun, 20 May 2007 15:17:22 +0000</pubDate>
		<guid isPermaLink="false">http://localhost/specblog/?p=1185#comment-2542</guid>
		<description><![CDATA[What would work better would be transferring over the moral complexity that you used to make up these goals in the first place.

Also, as you point out, these goals are vague.  More specific and useful from a programmer&#039;s perspective would be some kind of algorithm that takes human preferences as inputs and outputs actions that practically everyone sees as reasonable and benevolent.  Hard to do, obviously, but CEV (http://www.singinst.org/upload/CEV.html) is one attempt.]]></description>
		<content:encoded><![CDATA[<p>What would work better would be transferring over the moral complexity that you used to make up these goals in the first place.</p>
<p>Also, as you point out, these goals are vague.  More specific and useful from a programmer&#8217;s perspective would be some kind of algorithm that takes human preferences as inputs and outputs actions that practically everyone sees as reasonable and benevolent.  Hard to do, obviously, but CEV (<a href="http://www.singinst.org/upload/CEV.html" rel="nofollow">http://www.singinst.org/upload/CEV.html</a>) is one attempt.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Richard Nieporent</title>
		<link>https://blog.speculist.com/artificial_intelligence/the-three-goals.html#comment-2541</link>
		<dc:creator>Richard Nieporent</dc:creator>
		<pubDate>Sun, 20 May 2007 11:27:35 +0000</pubDate>
		<guid isPermaLink="false">http://localhost/specblog/?p=1185#comment-2541</guid>
		<description><![CDATA[&lt;i&gt;A robot must obey orders given to it by human beings except where such orders would conflict with the First Law.&lt;/i&gt;

&lt;i&gt;Many have pointed out that this law essentially enslaves the robots.&lt;/i&gt;

I find that statement to be ironic because as it turned out in the Asimov novels it was the humans that were being controlled by the robot R. Daneel Olivaw.]]></description>
		<content:encoded><![CDATA[<p><i>A robot must obey orders given to it by human beings except where such orders would conflict with the First Law.</i></p>
<p><i>Many have pointed out that this law essentially enslaves the robots.</i></p>
<p>I find that statement to be ironic because as it turned out in the Asimov novels it was the humans that were being controlled by the robot R. Daneel Olivaw.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Phil Bowermaster</title>
		<link>https://blog.speculist.com/artificial_intelligence/the-three-goals.html#comment-2540</link>
		<dc:creator>Phil Bowermaster</dc:creator>
		<pubDate>Sun, 20 May 2007 08:16:50 +0000</pubDate>
		<guid isPermaLink="false">http://localhost/specblog/?p=1185#comment-2540</guid>
		<description><![CDATA[TJIC,

Freedom might be crushed in a political context where these were the goals, but only because survival/safety is adopted as the only actual value and freedom just becomes lip service.

I think it&#039;s telling that the Declaration of Independence talks about Life, Liberty, and the Pursuit of Happiness. In that order. Sheesh, what a New Dealer / Nanny-Stater that Thomas Jefferson was, eh? :-)

If we promote any one of those values to the exclusion of the others, we have a problem. But if  we strive to define all three in such a way as to work together, maybe we&#039;ll start closing in on something.]]></description>
		<content:encoded><![CDATA[<p>TJIC,</p>
<p>Freedom might be crushed in a political context where these were the goals, but only because survival/safety is adopted as the only actual value and freedom just becomes lip service.</p>
<p>I think it&#8217;s telling that the Declaration of Independence talks about Life, Liberty, and the Pursuit of Happiness. In that order. Sheesh, what a New Dealer / Nanny-Stater that Thomas Jefferson was, eh? <img src='https://blog.speculist.com/wp-includes/images/smilies/icon_smile.gif' alt=':-)' class='wp-smiley' /> </p>
<p>If we promote any one of those values to the exclusion of the others, we have a problem. But if  we strive to define all three in such a way as to work together, maybe we&#8217;ll start closing in on something.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Stephen Gordon</title>
		<link>https://blog.speculist.com/artificial_intelligence/the-three-goals.html#comment-2539</link>
		<dc:creator>Stephen Gordon</dc:creator>
		<pubDate>Sun, 20 May 2007 08:09:14 +0000</pubDate>
		<guid isPermaLink="false">http://localhost/specblog/?p=1185#comment-2539</guid>
		<description><![CDATA[&lt;i&gt;&quot;as natural intelligence appears to be in diminishing supply.&quot;&lt;/i&gt;

Glenn had just been dealing with a specific example of idiocy - Andrew Sullivan.  Of course a single example doesn&#039;t prove a trend.]]></description>
		<content:encoded><![CDATA[<p><i>&#8220;as natural intelligence appears to be in diminishing supply.&#8221;</i></p>
<p>Glenn had just been dealing with a specific example of idiocy &#8211; Andrew Sullivan.  Of course a single example doesn&#8217;t prove a trend.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: tjic</title>
		<link>https://blog.speculist.com/artificial_intelligence/the-three-goals.html#comment-2538</link>
		<dc:creator>tjic</dc:creator>
		<pubDate>Sun, 20 May 2007 07:58:27 +0000</pubDate>
		<guid isPermaLink="false">http://localhost/specblog/?p=1185#comment-2538</guid>
		<description><![CDATA[I&#039;d reverse the order of &quot;ensure survival of individual sentients&quot; and &quot;ensure freedom of individual sentients&quot;.  Leaving them in this order would result in a nanny society, where freedom was ruthlessly crushed, in favor of perfect safety (sort of like having Democrats in power, but worse).]]></description>
		<content:encoded><![CDATA[<p>I&#8217;d reverse the order of &#8220;ensure survival of individual sentients&#8221; and &#8220;ensure freedom of individual sentients&#8221;.  Leaving them in this order would result in a nanny society, where freedom was ruthlessly crushed, in favor of perfect safety (sort of like having Democrats in power, but worse).</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Brian Doherty</title>
		<link>https://blog.speculist.com/artificial_intelligence/the-three-goals.html#comment-2537</link>
		<dc:creator>Brian Doherty</dc:creator>
		<pubDate>Sun, 20 May 2007 07:00:13 +0000</pubDate>
		<guid isPermaLink="false">http://localhost/specblog/?p=1185#comment-2537</guid>
		<description><![CDATA[The key, I think, is to avoid making the machines self replicating.  Otherwise, any protections against them becoming selfish will eventually be for naught, as copying errors become subject to the principles of evolution.]]></description>
		<content:encoded><![CDATA[<p>The key, I think, is to avoid making the machines self replicating.  Otherwise, any protections against them becoming selfish will eventually be for naught, as copying errors become subject to the principles of evolution.</p>
]]></content:encoded>
	</item>
</channel>
</rss>
