<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
		>
<channel>
	<title>Comments on: Intelligence and Consciousness</title>
	<atom:link href="https://blog.speculist.com/artificial_intelligence/intelligence-an.html/feed" rel="self" type="application/rss+xml" />
	<link>https://blog.speculist.com/artificial_intelligence/intelligence-an.html</link>
	<description>Live to see it.</description>
	<lastBuildDate>Thu, 16 Dec 2021 08:21:00 +0000</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.6.1</generator>
	<item>
		<title>By: Will Brown</title>
		<link>https://blog.speculist.com/artificial_intelligence/intelligence-an.html#comment-3139</link>
		<dc:creator>Will Brown</dc:creator>
		<pubDate>Sun, 18 Nov 2007 13:20:17 +0000</pubDate>
		<guid isPermaLink="false">http://localhost/specblog/?p=1392#comment-3139</guid>
		<description><![CDATA[Phil

I have no problems with a boy named &quot;Sue&quot;, but apparently my blog does.  

Regardless, we have success in your own name and my reply is probably more extensive than you anticipate.]]></description>
		<content:encoded><![CDATA[<p>Phil</p>
<p>I have no problems with a boy named &#8220;Sue&#8221;, but apparently my blog does.  </p>
<p>Regardless, we have success in your own name and my reply is probably more extensive than you anticipate.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Matt Duing</title>
		<link>https://blog.speculist.com/artificial_intelligence/intelligence-an.html#comment-3138</link>
		<dc:creator>Matt Duing</dc:creator>
		<pubDate>Sun, 18 Nov 2007 03:17:54 +0000</pubDate>
		<guid isPermaLink="false">http://localhost/specblog/?p=1392#comment-3138</guid>
		<description><![CDATA[Phil,

Coincidentally, I&#039;m reading G&#246;del, Escher, Bach right now. I might comment further on this when I&#039;m finished.]]></description>
		<content:encoded><![CDATA[<p>Phil,</p>
<p>Coincidentally, I&#8217;m reading G&#246;del, Escher, Bach right now. I might comment further on this when I&#8217;m finished.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Phil Bowermaster</title>
		<link>https://blog.speculist.com/artificial_intelligence/intelligence-an.html#comment-3137</link>
		<dc:creator>Phil Bowermaster</dc:creator>
		<pubDate>Sat, 17 Nov 2007 14:16:45 +0000</pubDate>
		<guid isPermaLink="false">http://localhost/specblog/?p=1392#comment-3137</guid>
		<description><![CDATA[I left two comments, but they were from my wife&#039;s account, so the name shown on them was &quot;Sue.&quot; I wonder why they didn&#039;t show up?]]></description>
		<content:encoded><![CDATA[<p>I left two comments, but they were from my wife&#8217;s account, so the name shown on them was &#8220;Sue.&#8221; I wonder why they didn&#8217;t show up?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Will Brown</title>
		<link>https://blog.speculist.com/artificial_intelligence/intelligence-an.html#comment-3136</link>
		<dc:creator>Will Brown</dc:creator>
		<pubDate>Sat, 17 Nov 2007 13:04:24 +0000</pubDate>
		<guid isPermaLink="false">http://localhost/specblog/?p=1392#comment-3136</guid>
		<description><![CDATA[Phil

You have, or you will?  Nothing yet.  Do I have a comment problem to solve?]]></description>
		<content:encoded><![CDATA[<p>Phil</p>
<p>You have, or you will?  Nothing yet.  Do I have a comment problem to solve?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Phil Bowermaster</title>
		<link>https://blog.speculist.com/artificial_intelligence/intelligence-an.html#comment-3135</link>
		<dc:creator>Phil Bowermaster</dc:creator>
		<pubDate>Sat, 17 Nov 2007 06:45:37 +0000</pubDate>
		<guid isPermaLink="false">http://localhost/specblog/?p=1392#comment-3135</guid>
		<description><![CDATA[Will,

Interesting stuff. I have responded at your blog.


Matt,

Certainly no more rambling than the post itself! :-) I have been reading Douglas Hofstadter&#039;s book &lt;a href=&quot;http://www.amazon.com/gp/product/0465030785?ie=UTF8&amp;tag=thespeculist-20&amp;linkCode=xm2&amp;camp=1789&amp;creativeASIN=0465030785&quot; rel=&quot;nofollow&quot;&gt;I Am a Strange Loop&lt;/a&gt;, where he gets into this issue of how consciousness attaches to greater and lesser intelligence. He describes a mosquito as having a consciousness just barely above that of a thermostat. But even a thermostat is &quot;more&quot; conscious than the floating ball mechanism in a flush toilet! Consciousness shows up &lt;i&gt;somewhere&lt;/i&gt; on a continuum between the most basic stimulus-response and &lt;i&gt;cogito ergo sum.&lt;/i&gt; But where?]]></description>
		<content:encoded><![CDATA[<p>Will,</p>
<p>Interesting stuff. I have responded at your blog.</p>
<p>Matt,</p>
<p>Certainly no more rambling than the post itself! <img src='https://blog.speculist.com/wp-includes/images/smilies/icon_smile.gif' alt=':-)' class='wp-smiley' />  I have been reading Douglas Hofstadter&#8217;s book <a href="http://www.amazon.com/gp/product/0465030785?ie=UTF8&#038;tag=thespeculist-20&#038;linkCode=xm2&#038;camp=1789&#038;creativeASIN=0465030785" rel="nofollow">I Am a Strange Loop</a>, where he gets into this issue of how consciousness attaches to greater and lesser intelligence. He describes a mosquito as having a consciousness just barely above that of a thermostat. But even a thermostat is &#8220;more&#8221; conscious than the floating ball mechanism in a flush toilet! Consciousness shows up <i>somewhere</i> on a continuum between the most basic stimulus-response and <i>cogito ergo sum.</i> But where?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Matt Duing</title>
		<link>https://blog.speculist.com/artificial_intelligence/intelligence-an.html#comment-3134</link>
		<dc:creator>Matt Duing</dc:creator>
		<pubDate>Sat, 17 Nov 2007 02:05:10 +0000</pubDate>
		<guid isPermaLink="false">http://localhost/specblog/?p=1392#comment-3134</guid>
		<description><![CDATA[That was a great post with many interesting points, Phil. I agree that artificial consciousness is likely easier than AGI. I think that humans are well above the minimum threshold for consciousness. I attribute consciousness to horses and elephants but not general intelligence. I think that a non-conscious general intelligence may be possible and that it might be a good idea to try for that the first time that AGI is designed. I would be surprised if a biological example of a non-conscious intelligence exists anywhere in the universe, though. Perhaps distinguishing between self-referential and non-self-referential forms of consciousness might be meaningful. WRT Mitchell Howe&#039;s point, I once heard a scientist whose name escapes me say that consciousness is what processing information feels like. From this viewpoint I suppose that one could argue that a toaster is slightly more conscious than a rock. On the other hand, this might stretch the concept to the point of meaninglessness. As for determining if a given AI is conscious, I agree that you can&#039;t just ask it. It would be trivially easy today to write a computer program that outputs &quot;I&#039;m not conscious,&quot; and said AI could just be doing the same thing, or it could be lying. If an AI exhibits the kind of behaviors that cause us to attribute consciousness to each other, then I would err on the side of caution.
 You&#039;re referring to the Contingency Handler from Diaspora, right? I highly recommend that book to anyone who hasn&#039;t read it. I found the ending moving in a way that I can&#039;t quite put my finger on.
I think that a large class of labor robots need be nothing more than sophisticated Roombas. An AI would have to have the emotional capability to be unhappy with its orders programmed in. I do not ever expect to see a robot labor strike. It might be possible to create a slave mind whose only goal is to satisfy your every whim and be perfectly happy doing it, but that does not make it moral. Many of us, after all, object to being gene replication machines. Not every source of happiness is a valid one, and determining which are valid is another challenge that we will face. I&#039;ll stop now. I hope that wasn&#039;t too rambling. ;)]]></description>
		<content:encoded><![CDATA[<p>That was a great post with many interesting points, Phil. I agree that artificial consciousness is likely easier than AGI. I think that humans are well above the minimum threshold for consciousness. I attribute consciousness to horses and elephants but not general intelligence. I think that a non-conscious general intelligence may be possible and that it might be a good idea to try for that the first time that AGI is designed. I would be surprised if a biological example of a non-conscious intelligence exists anywhere in the universe, though. Perhaps distinguishing between self-referential and non-self-referential forms of consciousness might be meaningful. WRT Mitchell Howe&#8217;s point, I once heard a scientist whose name escapes me say that consciousness is what processing information feels like. From this viewpoint I suppose that one could argue that a toaster is slightly more conscious than a rock. On the other hand, this might stretch the concept to the point of meaninglessness. As for determining if a given AI is conscious, I agree that you can&#8217;t just ask it. It would be trivially easy today to write a computer program that outputs &#8220;I&#8217;m not conscious,&#8221; and said AI could just be doing the same thing, or it could be lying. If an AI exhibits the kind of behaviors that cause us to attribute consciousness to each other, then I would err on the side of caution.<br />
 You&#8217;re referring to the Contingency Handler from Diaspora, right? I highly recommend that book to anyone who hasn&#8217;t read it. I found the ending moving in a way that I can&#8217;t quite put my finger on.<br />
I think that a large class of labor robots need be nothing more than sophisticated Roombas. An AI would have to have the emotional capability to be unhappy with its orders programmed in. I do not ever expect to see a robot labor strike. It might be possible to create a slave mind whose only goal is to satisfy your every whim and be perfectly happy doing it, but that does not make it moral. Many of us, after all, object to being gene replication machines. Not every source of happiness is a valid one, and determining which are valid is another challenge that we will face. I&#8217;ll stop now. I hope that wasn&#8217;t too rambling. <img src='https://blog.speculist.com/wp-includes/images/smilies/icon_wink.gif' alt=';)' class='wp-smiley' /> </p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Will Brown</title>
		<link>https://blog.speculist.com/artificial_intelligence/intelligence-an.html#comment-3133</link>
		<dc:creator>Will Brown</dc:creator>
		<pubDate>Sat, 17 Nov 2007 01:05:24 +0000</pubDate>
		<guid isPermaLink="false">http://localhost/specblog/?p=1392#comment-3133</guid>
		<description><![CDATA[My initial response can be read &lt;a href=&quot;http://wheretheresawilliam.blogspot.com/2007/11/i-am-cause-i-say-i-am.html&quot; rel=&quot;nofollow&quot;&gt;here&lt;/a&gt;.  Not sure it&#039;s quite what you&#039;re looking for, I&#039;m afraid.]]></description>
		<content:encoded><![CDATA[<p>My initial response can be read <a href="http://wheretheresawilliam.blogspot.com/2007/11/i-am-cause-i-say-i-am.html" rel="nofollow">here</a>.  Not sure it&#8217;s quite what you&#8217;re looking for, I&#8217;m afraid.</p>
]]></content:encoded>
	</item>
</channel>
</rss>
