<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
		>
<channel>
	<title>Comments on: Closer Than We Think</title>
	<atom:link href="https://blog.speculist.com/better_all_the_time/closer-than-we-1.html/feed" rel="self" type="application/rss+xml" />
	<link>https://blog.speculist.com/better_all_the_time/closer-than-we-1.html</link>
	<description>Live to see it.</description>
	<lastBuildDate>Thu, 16 Dec 2021 08:21:00 +0000</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.6.1</generator>
	<item>
		<title>By: Stephen Gordon</title>
		<link>https://blog.speculist.com/better_all_the_time/closer-than-we-1.html#comment-2405</link>
		<dc:creator>Stephen Gordon</dc:creator>
		<pubDate>Mon, 16 Apr 2007 14:13:37 +0000</pubDate>
		<guid isPermaLink="false">http://localhost/specblog/?p=1139#comment-2405</guid>
		<description><![CDATA[Goertzel&#039;s ideas here reminded me of Eliezer Yudkowsky&#039;s recent lecture &quot;&lt;a href=&quot;http://alfin2100.blogspot.com/2007/04/intelligence-explosion.html&quot; rel=&quot;nofollow&quot;&gt;The Intelligence Explosion&lt;/a&gt;&quot; that he gave at the Singularity Summit.

Eliezer Yudkowsky was focused on the feasibility of &lt;i&gt;Friendly&lt;/i&gt; AI.  

Some have argued that any advanced AI will shoot past us.  Therefore, there is no way for us to predict what it will do.  It could just as easily be evil as good.

This is probably oversimplified, but Yudkowsky thinks that if we can engineer a friendly AI initially, it can then set the parameters for each upgrade.  It would keep itself friendly as it improves.

An analogy, I suppose, is a good kid.  Let&#039;s say we have a somewhat immature but nice young guy who&#039;s 13 years old.

Such a person will grow in intelligence and complexity, but if they are good at that age, they will not generally seek to get better at being evil.  Instead, they will learn skills that help them be productive and helpful in the world.

That, I think, is the hope for general AI.  The challenge, I guess, lies in getting the &quot;newborn&quot; AI to become a decent adolescent.]]></description>
		<content:encoded><![CDATA[<p>Goertzel&#8217;s ideas here reminded me of Eliezer Yudkowsky&#8217;s recent lecture &#8220;<a href="http://alfin2100.blogspot.com/2007/04/intelligence-explosion.html" rel="nofollow">The Intelligence Explosion</a>&#8221; that he gave at the Singularity Summit.</p>
<p>Eliezer Yudkowsky was focused on the feasibility of <i>Friendly</i> AI.  </p>
<p>Some have argued that any advanced AI will shoot past us.  Therefore, there is no way for us to predict what it will do.  It could just as easily be evil as good.</p>
<p>This is probably oversimplified, but Yudkowsky thinks that if we can engineer a friendly AI initially, it can then set the parameters for each upgrade.  It would keep itself friendly as it improves.</p>
<p>An analogy, I suppose, is a good kid.  Let&#8217;s say we have a somewhat immature but nice young guy who&#8217;s 13 years old.</p>
<p>Such a person will grow in intelligence and complexity, but if they are good at that age, they will not generally seek to get better at being evil.  Instead, they will learn skills that help them be productive and helpful in the world.</p>
<p>That, I think, is the hope for general AI.  The challenge, I guess, lies in getting the &#8220;newborn&#8221; AI to become a decent adolescent.</p>
]]></content:encoded>
	</item>
</channel>
</rss>
