<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
		>
<channel>
	<title>Comments on: Energy-Efficient Robots</title>
	<atom:link href="https://blog.speculist.com/robotics/energyefficient.html/feed" rel="self" type="application/rss+xml" />
	<link>https://blog.speculist.com/robotics/energyefficient.html</link>
	<description>Live to see it.</description>
	<lastBuildDate>Thu, 16 Dec 2021 08:21:00 +0000</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.6.1</generator>
	<item>
		<title>By: Karl Hallowell</title>
		<link>https://blog.speculist.com/robotics/energyefficient.html#comment-281</link>
		<dc:creator>Karl Hallowell</dc:creator>
		<pubDate>Sun, 20 Feb 2005 08:06:30 +0000</pubDate>
		<guid isPermaLink="false">http://localhost/specblog/?p=221#comment-281</guid>
		<description><![CDATA[Energy is cheap. That&#039;s not the factor that&#039;s going to cause humans to be obsolete. When it becomes cheaper to use a robot than to hire sufficient humans, then that&#039;ll be the time to switch. Needless to say, that point has already arrived for a number of applications. General manual labor is still some distance away.&lt;p&gt;

Steve, much of the analysis on &quot;3 Laws Unsafe&quot; isn&#039;t that good. I have yet to see anything there that tops Asimov&#039;s idea. Simple, even simplistic, beats complex systems that you can&#039;t guarantee. And frankly, a lot of the arguments there suffer from simplistic reasoning as well. For example, Michael Anissimov &lt;a href=&quot;http://www.asimovlaws.com/articles/archives/2004/07/deconstructing.html&quot; rel=&quot;nofollow&quot;&gt;advocates&lt;/a&gt; allowing robots to evolve uncontrolled except for initial instructions. Even assuming that the robots couldn&#039;t evolve away the initial instructions, it&#039;s still a tremendous gamble that the unintended consequences won&#039;t be too dire.
&lt;/p&gt;]]></description>
		<content:encoded><![CDATA[<p>Energy is cheap. That&#8217;s not the factor that&#8217;s going to cause humans to be obsolete. When it becomes cheaper to use a robot than to hire sufficient humans, then that&#8217;ll be the time to switch. Needless to say, that point has already arrived for a number of applications. General manual labor is still some distance away.
<p>Steve, much of the analysis on &#8220;3 Laws Unsafe&#8221; isn&#8217;t that good. I have yet to see anything there that tops Asimov&#8217;s idea. Simple, even simplistic, beats complex systems that you can&#8217;t guarantee. And frankly, a lot of the arguments there suffer from simplistic reasoning as well. For example, Michael Anissimov <a href="http://www.asimovlaws.com/articles/archives/2004/07/deconstructing.html" rel="nofollow">advocates</a> allowing robots to evolve uncontrolled except for initial instructions. Even assuming that the robots couldn&#8217;t evolve away the initial instructions, it&#8217;s still a tremendous gamble that the unintended consequences won&#8217;t be too dire.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Engineer-Poet</title>
		<link>https://blog.speculist.com/robotics/energyefficient.html#comment-280</link>
		<dc:creator>Engineer-Poet</dc:creator>
		<pubDate>Sat, 19 Feb 2005 09:53:53 +0000</pubDate>
		<guid isPermaLink="false">http://localhost/specblog/?p=221#comment-280</guid>
		<description><![CDATA[It may be possible to become more energy-efficient than humans using hopping motions, but it&#039;s hard to see how order-of-magnitude improvements could be made without going to wheels.

I don&#039;t see why energy efficiency is all that important, save for the military; most robots in industrial or domestic settings will have power readily available, and their engineering can concentrate on other useful attributes.

(Incidentally, the preview window claims that I am no longer logged in, and won&#039;t let me post without going back. Strange.)]]></description>
		<content:encoded><![CDATA[<p>It may be possible to become more energy-efficient than humans using hopping motions, but it&#8217;s hard to see how order-of-magnitude improvements could be made without going to wheels.</p>
<p>I don&#8217;t see why energy efficiency is all that important, save for the military; most robots in industrial or domestic settings will have power readily available, and their engineering can concentrate on other useful attributes.</p>
<p>(Incidentally, the preview window claims that I am no longer logged in, and won&#8217;t let me post without going back. Strange.)</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Phil Bowermaster</title>
		<link>https://blog.speculist.com/robotics/energyefficient.html#comment-279</link>
		<dc:creator>Phil Bowermaster</dc:creator>
		<pubDate>Fri, 18 Feb 2005 15:30:44 +0000</pubDate>
		<guid isPermaLink="false">http://localhost/specblog/?p=221#comment-279</guid>
		<description><![CDATA[&lt;a href=&quot;http://www.asimovlaws.com/about/&quot; rel=&quot;nofollow&quot;&gt;Michael Anissimov&lt;/a&gt; says that the three laws are too simplistic ever to be coded into robots.]]></description>
		<content:encoded><![CDATA[<p><a href="http://www.asimovlaws.com/about/" rel="nofollow">Michael Anissimov</a> says that the three laws are too simplistic ever to be coded into robots.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Stephen Gordon</title>
		<link>https://blog.speculist.com/robotics/energyefficient.html#comment-278</link>
		<dc:creator>Stephen Gordon</dc:creator>
		<pubDate>Fri, 18 Feb 2005 15:22:25 +0000</pubDate>
		<guid isPermaLink="false">http://localhost/specblog/?p=221#comment-278</guid>
		<description><![CDATA[Something I&#039;ve often wondered - maybe some software guru can help with this - how are the three laws going to be written into robots?

In these early days of robotics, it will have to be simpler than some overriding positronic routine or whatever.  These early robots aren&#039;t the sentient machines that Asimov envisioned.  It might just be simple common-sense safety precautions.

I saw a program the other day about industrial robots.  One large car-building robot was working fast and furiously within a chain-link cage.  The moment the technician unlatched the gate to the cage, all work ceased.  That big machine could have hurt somebody had it continued working with someone inside.

That industrial safeguard was &quot;Law One&quot; in action.

Here are the laws:

http://www.auburn.edu/~vestmon/robotics.html

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.]]></description>
		<content:encoded><![CDATA[<p>Something I&#8217;ve often wondered &#8211; maybe some software guru can help with this &#8211; how are the three laws going to be written into robots?</p>
<p>In these early days of robotics, it will have to be simpler than some overriding positronic routine or whatever.  These early robots aren&#8217;t the sentient machines that Asimov envisioned.  It might just be simple common-sense safety precautions.</p>
<p>I saw a program the other day about industrial robots.  One large car-building robot was working fast and furiously within a chain-link cage.  The moment the technician unlatched the gate to the cage, all work ceased.  That big machine could have hurt somebody had it continued working with someone inside.</p>
<p>That industrial safeguard was &#8220;Law One&#8221; in action.</p>
<p>Here are the laws:</p>
<p><a href="http://www.auburn.edu/~vestmon/robotics.html" rel="nofollow">http://www.auburn.edu/~vestmon/robotics.html</a></p>
<p>1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.</p>
<p>2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.</p>
<p>3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.</p>
]]></content:encoded>
	</item>
</channel>
</rss>
