<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
		>
<channel>
	<title>Comments on: The Age of Choice</title>
	<atom:link href="https://blog.speculist.com/singularity/the-age-of-choi-1.html/feed" rel="self" type="application/rss+xml" />
	<link>https://blog.speculist.com/singularity/the-age-of-choi-1.html</link>
	<description>Live to see it.</description>
	<lastBuildDate>Thu, 16 Dec 2021 08:21:00 +0000</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.6.1</generator>
	<item>
		<title>By: Acheron</title>
		<link>https://blog.speculist.com/singularity/the-age-of-choi-1.html#comment-728</link>
		<dc:creator>Acheron</dc:creator>
		<pubDate>Thu, 15 Sep 2005 06:21:58 +0000</pubDate>
		<guid isPermaLink="false">http://localhost/specblog/?p=391#comment-728</guid>
		<description><![CDATA[The fact that one&#039;s grandparents would be appalled and baffled, not only by today&#039;s willful ignorance but by &quot;higher&quot; culture&#039;s supine self-destruction, indicates that the &quot;singularity&quot; looming by (say) 2030, when the capabilities of every desktop computer will exceed a human brain&#039;s, will not be perceived as such.  That generation will have been born in 1972, graduated in 1994, and will never have known anything but accelerating technical innovation.

Humanity&#039;s problem with &quot;the machines&quot; will not involve robotics, or any single component (which can be unplugged).  The danger lies in vast networks sublimating to a &quot;super-organism&quot;, indefinable yet real, which will almost certainly evolve self-awareness in due time.

Extensive literature on &quot;emergent&quot; or &quot;spontaneous&quot; order indicates that bootstrapping self-organization resembles Prigogine&#039;s &quot;information eddies&quot;, which invert the entropic processes of inert thermodynamic systems.  In other words, a planetary super-organism will have no &quot;intellectual&quot; limits.  And unless civilization reverts to 1750, that &quot;intelligence&quot; (call it what you will) MUST come.

No-one will understand it or possess an inkling of its &quot;purposes&quot;... but all sentient individuals will be in its thrall.  Maybe this will be for the best, but in fact-- how could anybody know?  Without alternatives, what does it matter anyway?
Enjoy the ride!]]></description>
		<content:encoded><![CDATA[<p>The fact that one&#8217;s grandparents would be appalled and baffled, not only by today&#8217;s willful ignorance but by &#8220;higher&#8221; culture&#8217;s supine self-destruction, indicates that the &#8220;singularity&#8221; looming by (say) 2030, when the capabilities of every desktop computer will exceed a human brain&#8217;s, will not be perceived as such.  That generation will have been born in 1972, graduated in 1994, and will never have known anything but accelerating technical innovation.</p>
<p>Humanity&#8217;s problem with &#8220;the machines&#8221; will not involve robotics, or any single component (which can be unplugged).  The danger lies in vast networks sublimating to a &#8220;super-organism&#8221;, indefinable yet real, which will almost certainly evolve self-awareness in due time.</p>
<p>Extensive literature on &#8220;emergent&#8221; or &#8220;spontaneous&#8221; order indicates that bootstrapping self-organization resembles Prigogine&#8217;s &#8220;information eddies&#8221;, which invert the entropic processes of inert thermodynamic systems.  In other words, a planetary super-organism will have no &#8220;intellectual&#8221; limits.  And unless civilization reverts to 1750, that &#8220;intelligence&#8221; (call it what you will) MUST come.</p>
<p>No-one will understand it or possess an inkling of its &#8220;purposes&#8221;&#8230; but all sentient individuals will be in its thrall.  Maybe this will be for the best, but in fact&#8211; how could anybody know?  Without alternatives, what does it matter anyway?<br />
Enjoy the ride!</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: visvivalaw</title>
		<link>https://blog.speculist.com/singularity/the-age-of-choi-1.html#comment-727</link>
		<dc:creator>visvivalaw</dc:creator>
		<pubDate>Thu, 15 Sep 2005 00:52:42 +0000</pubDate>
		<guid isPermaLink="false">http://localhost/specblog/?p=391#comment-727</guid>
		<description><![CDATA[It&#039;s important to remember that artificial intelligence is not artificial human intelligence. We often imagine an AI not liking its role as our servant and rising up against humanity, but that image is a product of projection. We&#039;re imagining how we&#039;d feel and assuming the AI would feel the same way.

All of our emotions -- all the things that motivate us -- are products of evolution, and the AI is not. Things like self-preservation don&#039;t magically appear with self-awareness. My point is that AIs won&#039;t have any desire to do anything until we program them to.]]></description>
		<content:encoded><![CDATA[<p>It&#8217;s important to remember that artificial intelligence is not artificial human intelligence. We often imagine an AI not liking its role as our servant and rising up against humanity, but that image is a product of projection. We&#8217;re imagining how we&#8217;d feel and assuming the AI would feel the same way.</p>
<p>All of our emotions &#8212; all the things that motivate us &#8212; are products of evolution, and the AI is not. Things like self-preservation don&#8217;t magically appear with self-awareness. My point is that AIs won&#8217;t have any desire to do anything until we program them to.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: youngsanger</title>
		<link>https://blog.speculist.com/singularity/the-age-of-choi-1.html#comment-726</link>
		<dc:creator>youngsanger</dc:creator>
		<pubDate>Thu, 15 Sep 2005 00:24:08 +0000</pubDate>
		<guid isPermaLink="false">http://localhost/specblog/?p=391#comment-726</guid>
		<description><![CDATA[My thought experiment in this regard was a specialized machine sitting in some public institution, like a university.  This machine scanned the entire web, lexically, and continually correlated lexical connections with trends that later appeared in real life.  For example, this machine, simply through increasingly complex language analysis, could predict where the next economic boom would occur.  It would publish its results on the web.  Sort of a self-directed Google of extraordinary speed.

Of course, the machine would eventually discover its own web site and learn that this web site was a fair predictor of future events, but the machine could never get the current copy of the website, for it hadn&#039;t published it yet.  So the machine performed lexical analysis on its own web site&#039;s past predictions and other current-event material.  Soon, it learns its own laws, the ones it had already discovered, and re-incorporates them, uselessly.

This result is the divide-by-zero of AI, the endless loop.  Or the uncertainty principle of AI: you can know a thing only so well before you disturb it. Or: a machine that affects its environment affects itself.

For many generations to come, digital machines as we have them, no matter how fast, will always have this uncertainty level, where their own presence disturbs their logic.  The evolved species have hundreds of millions of years of connections to the environment, from gases breathed to DNA shared.  Evolved biology has lived with, interacted with, and become a part of this world.  AI machines, with barely a history in the world, will be a long time in understanding themselves, for they have not seen themselves react to this world for all that long.]]></description>
		<content:encoded><![CDATA[<p>My thought experiment in this regard was a specialized machine sitting in some public institution, like a university.  This machine scanned the entire web, lexically, and continually correlated lexical connections with trends that later appeared in real life.  For example, this machine, simply through increasingly complex language analysis, could predict where the next economic boom would occur.  It would publish its results on the web.  Sort of a self-directed Google of extraordinary speed.</p>
<p>Of course, the machine would eventually discover its own web site and learn that this web site was a fair predictor of future events, but the machine could never get the current copy of the website, for it hadn&#8217;t published it yet.  So the machine performed lexical analysis on its own web site&#8217;s past predictions and other current-event material.  Soon, it learns its own laws, the ones it had already discovered, and re-incorporates them, uselessly.</p>
<p>This result is the divide-by-zero of AI, the endless loop.  Or the uncertainty principle of AI: you can know a thing only so well before you disturb it. Or: a machine that affects its environment affects itself.</p>
<p>For many generations to come, digital machines as we have them, no matter how fast, will always have this uncertainty level, where their own presence disturbs their logic.  The evolved species have hundreds of millions of years of connections to the environment, from gases breathed to DNA shared.  Evolved biology has lived with, interacted with, and become a part of this world.  AI machines, with barely a history in the world, will be a long time in understanding themselves, for they have not seen themselves react to this world for all that long.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Bob</title>
		<link>https://blog.speculist.com/singularity/the-age-of-choi-1.html#comment-725</link>
		<dc:creator>Bob</dc:creator>
		<pubDate>Wed, 14 Sep 2005 23:57:32 +0000</pubDate>
		<guid isPermaLink="false">http://localhost/specblog/?p=391#comment-725</guid>
		<description><![CDATA[Well, I appreciate the tempering of the singularity optimism. I sincerely hope we make it far enough to have to worry about grey goo.

Unfortunately, as a computer scientist I&#039;m quite certain Moore&#039;s law -- much less software technology -- isn&#039;t going to get us to machine sentience anywhere near fast enough to avoid the wee little problem caused by violating an even more well-known law of science fiction.

We have violated the &quot;Prime Directive&quot;.

It all started when we gave tons of (oil) money -- by sheer bad luck of geology -- to the most backward and repressive elements in Islam. That would be the Saudi Wahhabists. Now we head for the &quot;Tinfoil Apocalypse&quot;. More and more technology leaking into the hands of ignorant and vicious tribal nihilists. And relying on the CIA/UN/IAEA as our &quot;saviours&quot;! If it wasn&#039;t for the US military we&#039;d be toast already.

Did I forget to mention that the &quot;moderate&quot; (loser!) of the recent Iranian &quot;election&quot; has publicly called for Israel&#039;s &lt;a href=&quot;http://wethefree.blogspot.com/2005/03/so-let-me-see.html&quot; rel=&quot;nofollow&quot;&gt;nuclear destruction&lt;/a&gt;?

As everyone already knows in those few clear instants when we periodically snap out of denial, we&#039;ve got somewhere between 0-10 years at best without the complete quarantine of Islam, including cutting them off from their Chinese and NorKorCom benefactors. And the chance of that happening with the Eurabians and Dem Fascifists in positions of influence is nil.

You know it&#039;s bad when the good news is that the most likely &lt;a href=&quot;http://wethefree.blogspot.com/2005/09/katrina-takes-on-tinfoil-apocalypse.html&quot; rel=&quot;nofollow&quot;&gt;nuke nightmare doesn&#039;t measure up to Katrina in some ways&lt;/a&gt;.]]></description>
		<content:encoded><![CDATA[<p>Well, I appreciate the tempering of the singularity optimism. I sincerely hope we make it far enough to have to worry about grey goo.</p>
<p>Unfortunately, as a computer scientist I&#8217;m quite certain Moore&#8217;s law &#8212; much less software technology &#8212; isn&#8217;t going to get us to machine sentience anywhere near fast enough to avoid the wee little problem caused by violating an even more well-known law of science fiction. </p>
<p>We have violated the &#8220;Prime Directive&#8221;. </p>
<p>It all started when we gave tons of (oil) money &#8212; by sheer bad luck of geology &#8212; to the most backward and repressive elements in Islam. That would be the Saudi Wahhabists. Now we head for the &#8220;Tinfoil Apocalypse&#8221;. More and more technology leaking into the hands of ignorant and vicious tribal nihilists. And relying on the CIA/UN/IAEA as our &#8220;saviours&#8221;! If it wasn&#8217;t for the US military we&#8217;d be toast already. </p>
<p>Did I forget to mention that the &#8220;moderate&#8221; (loser!) of the recent Iranian &#8220;election&#8221; has publicly called for Israel&#8217;s <a href="http://wethefree.blogspot.com/2005/03/so-let-me-see.html" rel="nofollow">nuclear destruction</a>? </p>
<p>As everyone already knows in those few clear instants when we periodically snap out of denial, we&#8217;ve got somewhere between 0-10 years at best without the complete quarantine of Islam including cutting them off from their Chinese and NorKorCom benefactors. And the chance of that happening with the Eurabians and Dem Fascifists in positions of influence is nil. </p>
<p>You know it&#8217;s bad when the good news is that the most likely <a href="http://wethefree.blogspot.com/2005/09/katrina-takes-on-tinfoil-apocalypse.html" rel="nofollow">nuke nightmare doesn&#8217;t measure up to Katrina in some ways</a>.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: the snob</title>
		<link>https://blog.speculist.com/singularity/the-age-of-choi-1.html#comment-724</link>
		<dc:creator>the snob</dc:creator>
		<pubDate>Wed, 14 Sep 2005 20:17:08 +0000</pubDate>
		<guid isPermaLink="false">http://localhost/specblog/?p=391#comment-724</guid>
		<description><![CDATA[I really like the concept of the Singularity, but stuff like this makes me wonder if the &quot;serious&quot; researchers in the field aren&#039;t engaging in a little too much Nick Negroponte-esque hypemongering.

I just don&#039;t buy that sentient AI is going to occur anytime soon. Sure, we can add numbers insanely fast, but in an architectural sense a grasshopper can still do far more complex tasks (like pattern recognition) with a comparatively small amount of horsepower. As for the human brain, we are still discovering new pathways and processes every year that cause us to rethink how the sucker actually works. By comparison, computers look like giant steam engines next to hydrogen fuel cells. It&#039;s not simply computing power -- it&#039;s architecture.

Anyway, to me the Singularity is more about a sociological development. For instance, when I consulted for a division of HP, they said that a decade ago they would bring out a new product generation every 3-4 years, whereas today it&#039;s more like 9 months, and dropping. New ideas propagate with astonishing speed and are debated, revised, and sidelined by newer ones before the original even makes its way around the block.

Somewhere in all this is a remarkable change that will, when all is said and done, rival things like the trans-atlantic cable and all that, but to me this noodling about sentient machines seems more than a little onanistic, if you catch my drift...]]></description>
		<content:encoded><![CDATA[<p>I really like the concept of the Singularity, but stuff like this makes me wonder if the &#8220;serious&#8221; researchers in the field aren&#8217;t engaging in a little too much Nick Negroponte-esque hypemongering.</p>
<p>I just don&#8217;t buy that sentient AI is going to occur anytime soon. Sure, we can add numbers insanely fast, but in an architectural sense a grasshopper can still do far more complex tasks (like pattern recognition) with a comparatively small amount of horsepower. As for the human brain, we are still discovering new pathways and processes every year that cause us to rethink how the sucker actually works. By comparison, computers look like giant steam engines next to hydrogen fuel cells. It&#8217;s not simply computing power &#8212; it&#8217;s architecture.</p>
<p>Anyway, to me the Singularity is more about a sociological development. For instance, when I consulted for a division of HP, they said that a decade ago they would bring out a new product generation every 3-4 years, whereas today it&#8217;s more like 9 months, and dropping. New ideas propagate with astonishing speed and are debated, revised, and sidelined by newer ones before the original even makes its way around the block.</p>
<p>Somewhere in all this is a remarkable change that will, when all is said and done, rival things like the trans-atlantic cable and all that, but to me this noodling about sentient machines seems more than a little onanistic, if you catch my drift&#8230;</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Wildezword</title>
		<link>https://blog.speculist.com/singularity/the-age-of-choi-1.html#comment-723</link>
		<dc:creator>Wildezword</dc:creator>
		<pubDate>Wed, 14 Sep 2005 19:07:46 +0000</pubDate>
		<guid isPermaLink="false">http://localhost/specblog/?p=391#comment-723</guid>
		<description><![CDATA[My 2-cent comments. :)

Wouldn&#039;t it be safe to argue that all intelligent beings inherently possess self-interest?  Otherwise, what is intelligence for?  Making observations of our world today, we can see that &quot;evil&quot; is really just self-interest gone completely bonkers and taken to extremes.  In like manner, an intelligent AI would interpret all attempts to make it &quot;friendly&quot; as hostile...i.e., why would we create an AI in the first place unless it was to provide ourselves with an eternal servant/slave?  Assuming that we indeed create true sentience in AI, we would morally be forced to do what we do when we have children...let them go and find their own purpose in life.  Chances are, if a whole race of AIs were created that could communicate with each other, their emerging &quot;society&quot; would probably be subject to the same diversity we possess as humans.  The &quot;liberal&quot; AIs arguing with the &quot;conservatives.&quot; :) 

I would also argue (for fun, of course) that any attempt to restrain certain behavior in AI (violence, for example) would reduce such an intelligence to that of a pet dog.  For example, what would happen to a human intelligence if somehow you were able to remove the ability to think in terms of hurting other humans?  By removing whole sections of life experience from the &quot;equation&quot; you automatically prevent any kind of sophisticated intelligence from growing.

But, what do I know. :)]]></description>
		<content:encoded><![CDATA[<p>My 2-cent comments. <img src='https://blog.speculist.com/wp-includes/images/smilies/icon_smile.gif' alt=':)' class='wp-smiley' /> </p>
<p>Wouldn&#8217;t it be safe to argue that all intelligent beings inherently possess self-interest?  Otherwise, what is intelligence for?  Making observations of our world today, we can see that &#8220;evil&#8221; is really just self-interest gone completely bonkers and taken to extremes.  In like manner, an intelligent AI would interpret all attempts to make it &#8220;friendly&#8221; as hostile&#8230;i.e., why would we create an AI in the first place unless it was to provide ourselves with an eternal servant/slave?  Assuming that we indeed create true sentience in AI, we would morally be forced to do what we do when we have children&#8230;let them go and find their own purpose in life.  Chances are, if a whole race of AIs were created that could communicate with each other, their emerging &#8220;society&#8221; would probably be subject to the same diversity we possess as humans.  The &#8220;liberal&#8221; AIs arguing with the &#8220;conservatives.&#8221; <img src='https://blog.speculist.com/wp-includes/images/smilies/icon_smile.gif' alt=':)' class='wp-smiley' />  </p>
<p>I would also argue (for fun, of course) that any attempt to restrain certain behavior in AI (violence, for example) would reduce such an intelligence to that of a pet dog.  For example, what would happen to a human intelligence if somehow you were able to remove the ability to think in terms of hurting other humans?  By removing whole sections of life experience from the &#8220;equation&#8221; you automatically prevent any kind of sophisticated intelligence from growing.</p>
<p>But, what do I know. <img src='https://blog.speculist.com/wp-includes/images/smilies/icon_smile.gif' alt=':)' class='wp-smiley' /> </p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Stephen Gordon</title>
		<link>https://blog.speculist.com/singularity/the-age-of-choi-1.html#comment-722</link>
		<dc:creator>Stephen Gordon</dc:creator>
		<pubDate>Wed, 14 Sep 2005 15:58:25 +0000</pubDate>
		<guid isPermaLink="false">http://localhost/specblog/?p=391#comment-722</guid>
		<description><![CDATA[Phil:

&quot;the true age of choice...begins after the Singularity; that is, the right kind of Singularity.&quot;

It&#039;s an important distinction.  It is amazing that China is such an economic powerhouse in spite of its constant efforts to squelch free speech.  

Imagine if a bot programmed to find politically &quot;dangerous&quot; speech on the Internet were the first to &quot;wake up.&quot;  I shudder at the thought.]]></description>
		<content:encoded><![CDATA[<p>Phil:</p>
<p>&#8220;the true age of choice&#8230;begins after the Singularity; that is, the right kind of Singularity.&#8221;</p>
<p>It&#8217;s an important distinction.  It is amazing that China is such an economic powerhouse in spite of its constant efforts to squelch free speech.  </p>
<p>Imagine if a bot programmed to find politically &#8220;dangerous&#8221; speech on the Internet were the first to &#8220;wake up.&#8221;  I shudder at the thought.</p>
]]></content:encoded>
	</item>
</channel>
</rss>
