<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
		>
<channel>
	<title>Comments on: Michael Anissimov: Luddite</title>
	<atom:link href="https://blog.speculist.com/artificial_intelligence/michael-anissim-1.html/feed" rel="self" type="application/rss+xml" />
	<link>https://blog.speculist.com/artificial_intelligence/michael-anissim-1.html</link>
	<description>Live to see it.</description>
	<lastBuildDate>Thu, 16 Dec 2021 08:21:00 +0000</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.6.1</generator>
	<item>
		<title>By: Tom</title>
		<link>https://blog.speculist.com/artificial_intelligence/michael-anissim-1.html#comment-4624</link>
		<dc:creator>Tom</dc:creator>
		<pubDate>Thu, 11 Jun 2009 06:40:50 +0000</pubDate>
		<guid isPermaLink="false">http://localhost/specblog/?p=1908#comment-4624</guid>
		<description><![CDATA[Third hand = Gripping Hand]]></description>
		<content:encoded><![CDATA[<p>Third hand = Gripping Hand</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Nick Tarleton</title>
		<link>https://blog.speculist.com/artificial_intelligence/michael-anissim-1.html#comment-4623</link>
		<dc:creator>Nick Tarleton</dc:creator>
		<pubDate>Tue, 09 Jun 2009 18:27:24 +0000</pubDate>
		<guid isPermaLink="false">http://localhost/specblog/?p=1908#comment-4623</guid>
		<description><![CDATA[&lt;blockquote&gt;Widespread supercomputers also enable careful AI researchers&lt;/blockquote&gt;

Not really; friendly AI is overwhelmingly a philosophical/mathematical/logical problem. As a rough analogy, the ability to produce bigger explosions doesn&#039;t, above a certain very low threshold, make it easier to safely demolish a building. There are probably world-savingly useful things that very careful people could do with huge computing power, but (given wide distribution) these are probably completely outweighed by the danger of uncareful AI.]]></description>
		<content:encoded><![CDATA[<blockquote><p>Widespread supercomputers also enable careful AI researchers</p></blockquote>
<p>Not really; friendly AI is overwhelmingly a philosophical/mathematical/logical problem. As a rough analogy, the ability to produce bigger explosions doesn&#8217;t, above a certain very low threshold, make it easier to safely demolish a building. There are probably world-savingly useful things that very careful people could do with huge computing power, but (given wide distribution) these are probably completely outweighed by the danger of uncareful AI.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Jonathan</title>
		<link>https://blog.speculist.com/artificial_intelligence/michael-anissim-1.html#comment-4622</link>
		<dc:creator>Jonathan</dc:creator>
		<pubDate>Tue, 09 Jun 2009 17:54:54 +0000</pubDate>
		<guid isPermaLink="false">http://localhost/specblog/?p=1908#comment-4622</guid>
		<description><![CDATA[I agree that we need to work on the Friendly AGI problem, but bitching and griping about faster computers isn&#039;t very constructive.]]></description>
		<content:encoded><![CDATA[<p>I agree that we need to work on the Friendly AGI problem, but bitching and griping about faster computers isn&#8217;t very constructive.</p>
]]></content:encoded>
	</item>
</channel>
</rss>
