<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>The Speculist &#187; Speaking of the Future</title>
	<atom:link href="https://blog.speculist.com/category/speaking_of_the_future/feed" rel="self" type="application/rss+xml" />
	<link>https://blog.speculist.com</link>
	<description>Live to see it.</description>
	<lastBuildDate>Thu, 25 Jul 2019 23:07:25 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.6.1</generator>
		<item>
		<title>Speaking of the Future</title>
		<link>https://blog.speculist.com/speaking_of_the_future/speaking-of-the-futur.html</link>
		<comments>https://blog.speculist.com/speaking_of_the_future/speaking-of-the-futur.html#comments</comments>
		<pubDate>Tue, 11 Dec 2012 21:47:04 +0000</pubDate>
		<dc:creator>Phil Bowermaster</dc:creator>
				<category><![CDATA[FastForward Radio]]></category>
		<category><![CDATA[Speaking of the Future]]></category>

		<guid isPermaLink="false">https://blog.speculist.com/?p=3912</guid>
		<description><![CDATA[You, too, can be a Speculist! Hosts Phil Bowermaster and Stephen Gordon provide a quick introduction to putting together your own group dedicated to talking about accelerating change, abundance, the Technological Singularity, and related topics. They will discuss: The Top 10 Hot Topics About the Future The Three Big Changes Everybody Wants to Discuss Finding [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><img class="alignright  wp-image-3913" title="smallgroup" src="https://blog.speculist.com/wp-content/uploads/2012/12/smallgroup.png" alt="" width="236" height="220" />You, too, can be a Speculist! Hosts Phil Bowermaster and Stephen Gordon provide a quick introduction to putting together your own group dedicated to talking about accelerating change, abundance, the Technological Singularity, and related topics. They will discuss:</p>
<p style="padding-left: 30px;">The Top 10 Hot Topics About the Future</p>
<p style="padding-left: 30px;">The Three Big Changes Everybody Wants to Discuss</p>
<p style="padding-left: 30px;">Finding Resources to Support Your New Group</p>
<p style="padding-left: 30px;">Finding Members and Getting Started</p>
<p>Be there:</p>
<p><a href="http://www.blogtalkradio.com/fastforwardradio/2012/12/13/speaking-of-the-future--fastforward-radio">Wednesday, 12 December 2012 7:00 PM PST / 10 PM EST</a></p>
<p><a href="http://www.blogtalkradio.com/fastforwardradio/2012/12/13/speaking-of-the-future--fastforward-radio"><img title="newlogo" src="https://blog.speculist.com/wp-content/uploads/2012/11/newlogo.png" alt="" width="298" height="299" /></a></p>
<div>
<p>Joining us for this discussion will be Stuart Seligman, who is currently organizing a local discussion group inspired by FastForward Radio. A longtime listener to FastForward Radio, Stuart has had an abiding interest in science and technology since, as a kid, he propped up a telescope in his backyard with his father, scanned the night sky, and tried to figure out how he could get up there. Stuart began his career as a Wall Street economic analyst and wound up in structured finance, where he developed an expertise in using neural networks and genetic algorithms to solve financial problems. After working for money-center banks and small investment firms, he became an entrepreneur and helped launch several startups. His new group will meet at the Monmouth County Library in Monmouth County, New Jersey, to discuss the scientific and technological changes that are occurring, the opportunities they represent, and how to take advantage of them.</p>
</div>
]]></content:encoded>
			<wfw:commentRss>https://blog.speculist.com/speaking_of_the_future/speaking-of-the-futur.html/feed</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>If I Live to Be 100</title>
		<link>https://blog.speculist.com/speaking_of_the_future/if-i-live-to-be.html</link>
		<comments>https://blog.speculist.com/speaking_of_the_future/if-i-live-to-be.html#comments</comments>
		<pubDate>Tue, 15 May 2007 05:43:52 +0000</pubDate>
		<dc:creator>Phil Bowermaster</dc:creator>
				<category><![CDATA[Libraries]]></category>
		<category><![CDATA[Scenarios]]></category>
		<category><![CDATA[Speaking of the Future]]></category>

		<guid isPermaLink="false">http://localhost/specblog/?p=1181</guid>
		<description><![CDATA[Here&#8217;s the first of three videos that I am putting together from the Mid-Atlantic Library Futures Conference which I attended last week. This one has responses to one of the Seven Questions About the Future:]]></description>
				<content:encoded><![CDATA[<p>Here&#8217;s the first of three videos that I am putting together from the Mid-Atlantic Library Futures Conference which I attended last week. This one has responses to one of the <a href="http://www.speculist.com/archives/000019.html">Seven Questions About the Future</a>:</p>
<p><center> <object width="425" height="350"><param name="movie" value="http://www.youtube.com/v/hc6o2QHbmjM"></param><param name="wmode" value="transparent"></param><embed src="https://www.youtube.com/v/hc6o2QHbmjM" type="application/x-shockwave-flash" wmode="transparent" width="425" height="350"></embed></object> </center></p>
]]></content:encoded>
			<wfw:commentRss>https://blog.speculist.com/speaking_of_the_future/if-i-live-to-be.html/feed</wfw:commentRss>
		<slash:comments>2</slash:comments>
		</item>
		<item>
		<title>Future of Libraries</title>
		<link>https://blog.speculist.com/blogging/future-of-libra-1.html</link>
		<comments>https://blog.speculist.com/blogging/future-of-libra-1.html#comments</comments>
		<pubDate>Thu, 03 May 2007 17:59:33 +0000</pubDate>
		<dc:creator>Phil Bowermaster</dc:creator>
				<category><![CDATA[Blogging]]></category>
		<category><![CDATA[Choose Your Future]]></category>
		<category><![CDATA[Evolution]]></category>
		<category><![CDATA[Speaking of the Future]]></category>

		<guid isPermaLink="false">http://localhost/specblog/?p=1163</guid>
		<description><![CDATA[I will be attending this conference Monday and Tuesday of next week: For two days in May, three hundred librarians will meet with visionaries from the disciplines of anthropology, architecture, public policy and science to discuss the future of libraries. By looking outside of the library, we seek to explore unique ideas that will make [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>I will be attending <a href="http://www.palinet.org/futures/malfuturesconference.aspx">this conference</a> Monday and Tuesday of next week:<br />
<blockquote>
<p>For two days in May, three hundred librarians will meet with visionaries from the disciplines of anthropology, architecture, public policy and science to discuss the future of libraries. By looking outside of the library, we seek to explore unique ideas that will make the difference. Imagine merging information, inspiration and imagination to transform the way we look at our future. And then working together to build a solid foundation that will serve as a concrete plan with which to move forward.</p></blockquote>
<p>The theme of the conference is an evocative one: Imagination to Transformation. Or if I may paraphrase: &#8220;live to see it.&#8221;</p>
<p>Speakers at the conference include <a href="http://www.kurzweilai.net/Fortuneprofile/Furtuneprofile.pdf">Ray Kurzweil</a>, <a href="http://www.marycatherinebateson.com/">Mary Catherine Bateson</a>, <a href="http://www.trendtalk.com/">Bob Treadway</a>, and others. Should be a fascinating couple of days.</p>
<p>Part of the program will also involve an exercise derived from our <a href="http://www.speculist.com/archives/000019.html">Seven Questions About the Future</a>. Remember those? We had <a href="http://www.speculist.com/archives/000065.html">a lot</a> <a href="http://www.speculist.com/archives/000624.html">of great</a> <a href="http://www.speculist.com/archives/000168.html">responses</a> <a href="http://www.speculist.com/archives/000152.html">back in</a> <a href="http://www.speculist.com/archives/000135.html">the day</a>.</p>
<p>I saw a good show on PBS last night about a villa excavated some time ago in Herculaneum (Pompeii&#8217;s upscale neighbor) where a library of more than 1800 ancient manuscripts was found, each one rolled up tight and toasted by Vesuvius.  The efforts of scholars over the past couple of hundred years to unroll (much less to read) these ancient books have been nothing short of heroic. There was initially hope that a lost tragedy of Sophocles or dialog of Plato might be found among these books; so far no such luck. But as modern chemistry makes it easier to unroll them, and new imaging technology makes it easier (and in many cases, possible) to read some part of them, we are learning quite a bit about the school of Epicurean philosophy to which they apparently belonged.</p>
<p><center> <a href="http://www.beazley.ox.ac.uk/OCRPG/script/ImagingPapyri.htm"><img alt="papyrus.jpg" src="https://www.blog.speculist.com/archives/papyrus.jpg" width="270" height="250" /><br />
</a></p>
<p><em>One of the papyri from Herculaneum</em></center></p>
<p>When picturing the library of the future, it&#8217;s hard not to imagine some kind of Google interface connecting everything ever published to everything else ever published via logical, cognitive, and semantic linking schemes that we can hardly imagine now. But I think the tireless efforts to decipher these burnt manuscripts give us another hint as to the role that libraries will continue to play. Libraries aren&#8217;t just collections of books &#8212; they are a link with the past. When ancient books such as these are found, it&#8217;s as though some piece of the past that was lost has been restored to us.</p>
<p>This is also why the destruction of a great library &#8212; such as occurred in <a href="http://en.wikipedia.org/wiki/Library_of_Alexandria#Destruction_of_the_Library">Alexandria</a> at some point 1500-1800 years ago &#8212; represents such a tremendous loss. It&#8217;s as though some part of the past has been blotted out.</p>
<p>Libraries are the original databases and the original time machines. It will be very interesting spending a couple of days getting a handle on where libraries are going &#8212; and how in the future they will be even more effective at showing us where we&#8217;ve been.</p>
]]></content:encoded>
			<wfw:commentRss>https://blog.speculist.com/blogging/future-of-libra-1.html/feed</wfw:commentRss>
		<slash:comments>2</slash:comments>
		</item>
		<item>
		<title>Doomsday Clock Speculist Challenge</title>
		<link>https://blog.speculist.com/speaking_of_the_future/doomsday-clock.html</link>
		<comments>https://blog.speculist.com/speaking_of_the_future/doomsday-clock.html#comments</comments>
		<pubDate>Tue, 23 Jan 2007 23:59:59 +0000</pubDate>
		<dc:creator>Stephen Gordon</dc:creator>
				<category><![CDATA[Better All The Time]]></category>
		<category><![CDATA[Speaking of the Future]]></category>

		<guid isPermaLink="false">http://localhost/specblog/?p=1055</guid>
		<description><![CDATA[[We're moving this entry back to the top to give those who haven't had a chance yet to tell us where you think the minute hand should be on the Doomsday Clock. Come on, we know you've been thinking about it... For those who have been following the development of this post, please be aware [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>[<em>We're moving this entry back to the top to give those who haven't had a chance yet to tell us where you think the minute hand should be on the Doomsday Clock. Come on, we know you've been thinking about it...</em>  For those who have been following the development of this post, please be aware that two Updates have been added to Kathy's original.]</p>
<p><img alt="dali.jpeg" src="https://www.blog.speculist.com/archives/dali.jpeg" width="107" height="86" style="margin: 0px 5px 5px 0px; float: left" /><br />
The <a href="http://en.wikipedia.org/wiki/Doomsday_clock">Doomsday Clock</a>, based at the University of Chicago, has been ticking off the metaphorical minutes until apocalyptic midnight since the beginning of the cold war between the U.S. and former Soviet Union&#8211;circa 1947. In those days, the threat of the U.S.S.R. launching nuclear weapons kept school children hunkered under their desks, practicing bomb drills as naively as they did tornado and fire drills.</p>
<p>The demise of the Soviet Union didn&#8217;t stop the Clock, however.  Its keepers, the <a href="http://www.thebulletin.org/">Bulletin of the Atomic Scientists</a> (also Chicago-based), keep it calibrated to the changing face of the threats to global survival. Since 2002, for example, the clock has been set at seven minutes to midnight.</p>
<p>Some purists might argue that the Bulletin is straying too far from its traditional message on nuclear issues. On Jan 17, 2007, the Doomsday Clock was set to five minutes to midnight, and the Bulletin issued this statement:</p>
<blockquote><p>&#8220;We stand at the brink of a second nuclear age. Not since the first atomic bombs were dropped on Hiroshima and Nagasaki has the world faced such perilous choices. North Korea&#8217;s recent test of a nuclear weapon, Iran&#8217;s nuclear ambitions, a renewed U.S. emphasis on the military utility of nuclear weapons, the failure to adequately secure nuclear materials, and the continued presence of some 26,000 nuclear weapons in the United States and Russia are symptomatic of a larger failure to solve the problems posed by the most destructive technology on Earth.</p>
<p>As in past deliberations, we have examined other human-made threats to civilization. We have concluded that the dangers posed by climate change are nearly as dire as those posed by nuclear weapons. The effects may be less dramatic in the short term than the destruction that could be wrought by nuclear explosions, but over the next three to four decades climate change could cause drastic harm to the habitats upon which human societies depend for survival.</p>
<p>This deteriorating state of global affairs leads the Board of Directors of the Bulletin of the Atomic Scientists&#8211;in consultation with a Board of Sponsors that includes 18 Nobel laureates&#8211;to move the minute hand of the &#8220;Doomsday Clock&#8221; from seven to five minutes to midnight.&#8221;</p></blockquote>
<p>Could there be mitigating factors the Bulletin scientists didn&#8217;t include in their calibrations?  In the spirit of Stephen and Phil&#8217;s lyrical response to the Doomsday argument,  I am hereby issuing a challenge: the formation of an ad hoc Bulletin of Speculists to present an alternative setting for the minute hand of the Doomsday Clock.</p>
<p><strong>UPDATE FROM STEPHEN:</strong></p>
<p>Of course the Doomsday clock is not a measurement of the actual risk of the world coming to an end.  It&#8217;s always been a measurement of the nervousness of atomic scientists that a major invention of their field will end the world.    I suspect politics has played its part.  Looking over this Doomsday Clock graph I see a rough correlation between the setting of the clock and the party holding the U.S. Presidency:</p>
<p><center><a href="https://www.blog.speculist.com/archives/600px-Doomsday_Clock_graph.html" onclick="window.open('https://www.blog.speculist.com/archives/600px-Doomsday_Clock_graph.html','popup','width=600,height=203,scrollbars=no,resizable=no,toolbar=no,directories=no,location=no,menubar=no,status=no,left=0,top=0'); return false"><img alt="600px-Doomsday_Clock_graph.jpg" src="https://www.blog.speculist.com/archives/600px-Doomsday_Clock_graph.jpg" width="400" height="135" /></a></p>
<p><i>click for a larger image</i></center></p>
<p>Here&#8217;s the official announcement of this latest move:</p>
<blockquote><p><a href="http://en.wikipedia.org/wiki/Doomsday_clock">The Bulletin of Atomic Scientists</a> (BAS) will move the minute hand of the &#8220;Doomsday Clock&#8221; on January 17, 2007 [to 5 minutes to midnight]&#8230;the first such change to the Clock since February 2002. The major new step reflects growing concerns about a &#8220;Second Nuclear Age&#8221; marked by grave threats, including: nuclear ambitions in Iran and North Korea, unsecured nuclear materials in Russia and elsewhere, the continuing &#8220;launch-ready&#8221; status of 2,000 of the 25,000 nuclear weapons held by the U.S. and Russia, escalating terrorism, and new pressure from climate change for expanded civilian nuclear power that could increase proliferation risks.</p></blockquote>
<p>One problem is that the BAS is trying to set the clock for two different things &#8211; actual Doomsday (which I would take to be the end of the world &#8211; at least for humans), and nuclear war.</p>
<p>Though it isn&#8217;t politically correct to say so, a regional nuclear war would not be the end of humanity.   It doesn&#8217;t even mean the rise of some weird post-apocalyptic Mad Max world.   Our civilization would plod on after a nuclear war between India and Pakistan or between Israel and Iran.  If the U.S. got nuked by a terrorist group it would probably be a single bomb destroying a single city &#8211; probably a major city.  Our economy would be devastated, but our civilization would limp away and then eventually charge back.  Ditto on a nuclear attack on U.S. interests from North Korea.</p>
<p>The risk of regional nuclear wars (which includes the risk of a nuclear terrorist attack) has risen as the risk of a big nuclear exchange (the kind that would endanger the human race as a whole) has diminished.</p>
<p>We had some close calls during the Cold War.  Everybody knows about the Cuban missile crisis (although you couldn&#8217;t tell it from the setting of the Doomsday Clock at the time).   Few know how close we came to going out <a href="http://www.damninteresting.com/?p=63">in 1983</a>.</p>
<p>But we made it through somehow.</p>
<p>So, Kathy, I think mitigating factors that those BAS guys aren&#8217;t considering include the following:</p>
<ol>
<li>Regional nuclear war doesn&#8217;t = doomsday.</li>
<li>Mutually assured destruction persists as a deterrent for a bigger nuclear war.</li>
<li>The global economy has continued to grow as a factor that decreases tensions between the big nuclear players.  China may not like us politically, but they do like having a market for their products.</li>
<li>It&#8217;s been assumed that mutually assured destruction wouldn&#8217;t deter regional nuclear war.  I&#8217;m not so sure.  The nuts that run Iran and North Korea might actually value their lives.  Hard to say.</li>
<li>Another politically incorrect opinion &#8211; The War on Terror is working.  The end of this war isn&#8217;t in sight, but the terrorists have been too busy with the full-time job of surviving us to attack us directly.</li>
</ol>
<p><img alt="doomsday-clock.jpg" src="https://www.blog.speculist.com/archives/doomsday-clock.jpg" width="100" height="100" align="right" hspace="5" vspace="5"/>As for where I set the clock, I think that we should first agree on where the lowest and highest risk should be set.  There&#8217;s no rule against setting the <a href="http://en.wikipedia.org/wiki/Doomsday_clock">Doomsday clock</a> outside of 15 minutes to midnight (in 1991 the clock was set to 17 minutes to midnight).  But the Doomsday clock graphic seems to suggest that as long as there is an existential risk, the clock should be set somewhere between 11:45 (lowest risk) and midnight (Doomsday).</p>
<p>But we need more clocks.  One &#8220;nuclear war&#8221; clock could reflect the risk of nuclear war in whatever form.   That, I&#8217;d argue, is what the BAS Doomsday Clock has become.  It&#8217;s not a human extinction clock, it&#8217;s a nuclear-war-of-all-kinds clock.  That being the case, I&#8217;d say the current setting of 5 minutes to midnight sounds about right.  The risk of regional nuclear war is high.</p>
<p>The risk of human extinction from nuclear war is much less.  If I was setting a clock for that I&#8217;d put it at 11:47 &#8211; just a little higher than the minimum risk.  Of course if we actually do have a regional nuclear war, this nuclear extinction clock would shoot close to midnight.  There is just no telling how this country would react if New York or Washington were wiped out.  Let me put it this way &#8211; the War on Terror is also a war to save the rest of the world from our response to such an attack.</p>
<p>A third clock could reflect all existential risks.  Risks like <a href="https://www.blog.speculist.com/archives/000286.html">these</a>.</p>
<p>I&#8217;d set that clock at 7 minutes to midnight.</p>
<p><strong>UPDATE FROM MICHAEL:</strong></p>
<p>Some of the following (which originated in a &#8216;backchannel&#8217; email exchange among the bloggers here at the Speculist), recapitulates facts and concepts already touched on by Kathy and Stephen.  To the extent that our readers find any such repetition distasteful, I offer an apology in advance.  However, I think that if these relevant facts are not laid out in these particular terms, it might damage the foundation of the ideas I&#8217;d like to discuss.</p>
<p><i>Begin email extract&#8230;</i></p>
<p>The greatest value of the &#8220;Doomsday Clock&#8221; (besides as a marketing campaign by the Bulletin of the Atomic Scientists, and a fairly successful one) is as a fairly visible metaphor for, and estimate of, one sort of existential risk.  When the Clock was initially conceived and presented, that existential risk was fairly clearly defined: Global Thermonuclear War between the only two social constructs capable of engaging in that activity.  First between the United States of America and the Union of Soviet Socialist Republics, then among blocs of nation-states either capable of independently developing nuclear weapons or granted them as clients of such independently-capable states.</p>
<p>With first the departure of France&#8217;s independently-acquired nuclear arsenal (The &#8220;Force de Frappe&#8221; lit. &#8216;Strike Force&#8217;) from the direct control of either the North Atlantic Treaty Organization or the Warsaw Pact and then the subsequent advent of nuclear arsenals, admitted or supposed, under the control of nation-states either unaligned with or more-or-less loosely associated with the original adversaries, the definition of the risk being characterized by the Clock expanded and became much more complex.  While the fundamental risk remained Global Thermonuclear War prosecuted by one or both of the only nation-states capable of accomplishing such a civilization-threatening feat (single-handedly or &#8216;cooperatively&#8217;), the contributing risks represented by escalation and alliances opened a larger number of paths from the status quo to the unthinkable outcome and some of those paths had distinctly lower thresholds standing between origin and outcome.  A number of writers in the period made believable projections of these paths their stock-in-trade.</p>
<p>As the definition of the chain of risk evaluated by the Clock evolved and expanded, its utility as a widely-accepted summary estimate of the probability of the fundamental existential risk became diluted.</p>
<p>After the reductions in overall capacity by the relevant &#8216;players&#8217; in the 1980s, and the dissolution of the Soviet Union in the &#8217;90s, the risk evaluated by the Clock, and thus its overall relevance, was diminished considerably.  Other risks, global and potentially existential (catastrophic meteorite impact, pandemic disease, climate changes, collapse of the global socioeconomic infrastructure due to insufficient forward planning, &#8220;Y2K&#8221;) or &#8220;merely&#8221; regional (local famine, &#8216;brushfire&#8217; conflicts formerly closely confined by their implications for the balance between superpowers) increased in their relative importance and/or attention regardless of any change in their independent probability.</p>
<p>In order to capitalize on the significant stock of intellectual and moral authority at their disposal, and not least to continue selling their publication in the market in the wake of such decreased attention, the Educational Foundation for Nuclear Science (parent organization of the Bulletin) extended the factors considered by the Doomsday Clock.  The recently-publicized addition of &#8216;global warming&#8217; as a sufficiently-significant influence as to notably move the measure in the direction of greater certainty is only the most proximate and public acknowledgement of this evolution.</p>
<p>HOWEVER: The fundamental concept of the Clock was, and remains, to place an easily-understood and widely agreed upon value on the probability of one or more large-scale risks to society. As the exact risk being evaluated becomes less-closely defined the ease of understanding, wide acceptability, and thus the utility of the value is diminished.</p>
<p>In the intervening years since the establishment of the Clock, and particularly since the turn of the millennium, our understanding of risk, and the means by which it can be rationally estimated even without a great concentration of highly-accurate information in the hands of a few analysts, have developed considerably.  One such means of understanding and evaluation, leveraging recent improvements in communications and computing technologies, is the concept of Prediction Markets (addressed <a href="http://www.speculist.com/archives/000040.html">early on</a> in the Speculist).  Evolved out of insurance underwriting, actuarial science, and futures markets, such markets attempt to capture information held by, but not necessarily shared among, a wide sample of observers, and to enlist economic self-interest to drive what might otherwise be a largely- or completely-altruistic exposure of limited information and expertise.  They operate best (make the most accurate, and profitable, predictions) when the risks being evaluated are clearly and concisely defined in time, space, and scope.  In formulating such a risk prediction, or &#8216;contract&#8217;, one would frame the risk to be evaluated in specific and measurable terms.  The Bulletin&#8217;s Clock, in attempting to remain relevant in light of changing patterns of risk, has sacrificed the specificity necessary to be a reliable estimate of those risks, even if its value is established by a large, knowledgeable subset of the population.</p>
<p>While there is considerable room for debate as to whether it is more or less ethical to frame such risk predictions in specific human terms versus less emotionally-charged, but less specific, purely-economic terms, the more specific the prediction, the more precisely the risks attending to the event(s) or outcome(s) predicted may be estimated.</p>
<p>Perhaps an example would be appropriate:</p>
<blockquote><p>&#8220;One million or more people dwelling in Sub-Saharan Africa will die of starvation and malnutrition by (on or before) 12 o&#8217;clock midnight, December 31st, 2007, as verified by official population and mortality statistics and estimates published by the World Health Organization not later than December 31st, 2008&#8221;</p></blockquote>
<p>is a far more specific statement of risk than a commodities futures contract that says, in effect:</p>
<blockquote><p>&#8220;COB 12/31/07 CBOT Sorghum >= US$10.00 / cwt.&#8221;</p></blockquote>
<p> [Translation: At market close, on the Chicago Board of Trade, on December 31st, 2007, Sorghum (also known as millet, considered livestock feed in North America and Europe, but a fairly popular grain for human consumption in parts of Asia and Africa) will be sold for $10.00 per hundredweight, roughly $4.20 per bushel.  Implied in this price are a number of factors including changes in demand among all users, worldwide crop sizes, transportation costs between successful producers and desirous consumers, and, conceivably, the unavailability of appealing foodstuffs in places, like sub-Saharan Africa, where people eat millet.]</p>
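<p>A risk contract framed this specifically can be represented almost mechanically. As a minimal sketch (the class, field names, and the $0.30 price below are all illustrative, not from any real market), the famine contract quoted above might look like:</p>

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class BinaryContract:
    """A yes/no risk contract, defined in time, space, and scope.

    Pays $1.00 per share if the stated condition is verified by the
    resolution source on or before the resolution deadline, else $0.
    """
    statement: str
    resolution_source: str
    event_deadline: date
    resolution_deadline: date

    def settle(self, condition_verified: bool) -> float:
        """Payout per share once the resolution source has reported."""
        return 1.0 if condition_verified else 0.0

# The famine contract from the text, restated as data:
famine = BinaryContract(
    statement=("1,000,000 or more people dwelling in Sub-Saharan Africa "
               "die of starvation and malnutrition"),
    resolution_source="WHO population and mortality statistics",
    event_deadline=date(2007, 12, 31),
    resolution_deadline=date(2008, 12, 31),
)

# A traded price of $0.30 per $1.00 share reads as a 30% crowd estimate
# that the condition will be met by the deadline.
price = 0.30
implied_probability = price / 1.00
```

<p>The point is the same one made in the text: the tighter the statement, the source, and the deadlines, the less room there is to argue about what the traded price is actually estimating.</p>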
<p>Tragedy, too, can be evaluated in futures markets.  To a certain extent, it already is.  When property insurance is underwritten, in Boise, Bangkok, or Baghdad, some consideration must be made of the threats to that property posed by both natural and man-made causes.  However, the specific contribution to the estimated risk posed by causes that others, including ourselves, might be interested in, is not transparent in the insurance contract itself.</p>
<p>While it may seem ghoulish and morally questionable to market a futures contract stating a specific cause of risk to specific property or individuals over a specific timeframe, there is a great deal of value in knowing what others might think about the probability of such a threat.  Whether or not those individuals have vested interests in the proposition under consideration beyond the possibility of making a certain, limited amount of money for guessing or estimating that risk correctly in light of the passage of events, the society at large gains a great deal of useful information regarding that particular threat and possibly regarding risks contingent upon human action in general.</p>
<p>Finally, as Thomas Sowell points out in his <u>Basic Economics</u>, insurance and futures contracts can serve as a means by which risk itself can be transferred from those with fewer resources to absorb and deal with it to those who have greater resources.  The farmer who must wait until harvest not only to know how large his crop is (i.e. how many units he can bring to market), but also the size and quality of all other competing farmers&#8217; crops (which set the market price per unit) bears a substantial amount of risk.  By contracting with a speculator to sell his crop, however large, at a price per unit established in advance of the harvest, the farmer limits his risk to the factors most closely under his or her own control (the productivity of his or her crop) and transfers the risk that others might out-produce the farmer to the speculator.  The speculator, in turn, can hedge his or her bets by investing only a part of his or her capital in any one commodity or market, perhaps reducing the possibility of &#8216;making a killing&#8217; by paying the farmer pennies on the eventual dollar value of the farmer&#8217;s crop in particularly lean years, but offsetting the overall loss should bumper crops reduce the value of the farmer&#8217;s output below the price agreed upon in advance.</p>
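<p>Sowell&#8217;s point about risk transfer can be made concrete with a little arithmetic. All the prices and quantities below are invented for illustration; what matters is the shape of the trade: the farmer&#8217;s revenue is fixed in advance, while the speculator absorbs the swing in either direction.</p>

```python
def revenue_at_spot(bushels: float, spot_price: float) -> float:
    """Revenue if the farmer waits and sells at the harvest-time spot price."""
    return bushels * spot_price

def revenue_forward(bushels: float, forward_price: float) -> float:
    """Revenue if the crop was sold in advance at an agreed forward price."""
    return bushels * forward_price

def speculator_pnl(bushels: float, forward_price: float, spot_price: float) -> float:
    """The speculator buys at the forward price and resells at spot."""
    return bushels * (spot_price - forward_price)

BUSHELS = 10_000
FORWARD = 4.00  # $/bushel, agreed before the harvest (made-up number)

# Bumper-crop year: everyone's output is large, so the spot price sags to $3.
# The farmer still collects the forward price; the speculator eats the loss.
# revenue_forward(BUSHELS, FORWARD)       -> 40000.0
# speculator_pnl(BUSHELS, FORWARD, 3.00)  -> -10000.0

# Lean year: scarce grain pushes spot to $6.  The farmer's revenue is
# unchanged -- that is the point -- and the speculator keeps the upside
# as compensation for having borne the risk.
# speculator_pnl(BUSHELS, FORWARD, 6.00)  -> 20000.0
```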
<p>To make a long two-cents worth short:  The Doomsday Clock has outlived its original value in light of the dilution of its prediction and the expansion of understanding of the nature of risk and the possibilities for more successfully predicting and hedging risks developed since the Clock&#8217;s inception.  If the Bulletin really wished to remain in the vanguard of risk-awareness, the Directors would establish the Bulletin as the definitive window on and sponsor of an openly-traded, cash-based, highly-specific Predictions Market.</p>
<p><i>&#8230;End email extract</i></p>
<p>AFTERTHOUGHTS:  One objection likely to be raised to my call for formulating marketable predictions in a specific, quantifiable, and mutually verifiable manner is that, particularly as predictions become more specific geographically and socially, the market for them also becomes more easily susceptible to manipulation by less-and-less potent actors.  The worldwide wheat crop probably couldn&#8217;t be materially manipulated by anything less than a nation-state or other actor of similar scope and capability.  On the other hand, the continued health and welfare of a single individual can easily be altered (negatively or positively) by deliberate action on the part of another single individual (or the same individual, if the incentives were right).  Political assassination is a canonical example here.  Deliberate manipulation of the outcome of a prediction market contract could, potentially, deliver a profit to someone who was willing to influence the outcome of the prediction.  Such manipulation for gain should, however, be fairly transparent as the trading value of the relevant contracts swung substantially away from previous values just prior to the perpetrators&#8217; intervention.  There is considerable evidence of just this sort of manipulation having taken place in the days leading up to the terror attacks on September 11th, 2001, as the downturn in the valuation of airline stocks resulting from the attacks was played to advantage by investors with advance knowledge of the attacks.  The fact that this manipulated investment was not only detected (eventually) but that it served as a link that tied these investors back to the organization that committed the attacks (thereby ultimately causing more harm than gain to the manipulators), as well as existing laws regarding securities fraud, insurance fraud, insider trading, and, more directly, criminal and civil laws covering physical and economic harm, intentional or otherwise, committed between persons or corporate bodies, would, I believe, serve to sufficiently dis-incentivize rational attempts at manipulation.</p>
<p>Finally, as my direct answer to the challenge posed in the original posting, I believe the clock should be set at:</p>
<blockquote><p>Nuclear Weapons Use, >US$2&#215;10<sup>13</sup> GDP decline, 93 years, = $0.01 ^ $0.005</p></blockquote>
<p>[Translation: The odds of a nuclear exchange causing the planetary GDP to be cut in half from today's value (a serious effect, but not quite the civilization-ending catastrophe I grew up expecting) occurring in the remaining years of the 21st century are about 1 in 100, and they recently ticked up by half a chance in 100.]</p>
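<p>The price-to-odds translation above can be sketched as a small calculation. This is a toy illustration, not any real market's API; the function name and the $1.00-payout convention are assumptions:</p>

```python
def implied_probability(contract_price, payout=1.00):
    # A binary prediction-market contract costs `contract_price` now and
    # pays `payout` if the event occurs; the ratio is the market's implied
    # probability of the event.
    return contract_price / payout

# The contract quoted above: a $0.01 price on a $1.00 payout is ~1-in-100 odds.
p_before = implied_probability(0.01)
print(p_before)            # 0.01

# A half-cent ($0.005) uptick moves the implied odds to about 1.5 in 100.
p_after = implied_probability(0.01 + 0.005)
print(round(p_after, 3))   # 0.015
```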
<p>UPDATE: (January 23, 2007 10:28PM MST)</p>
<p>It struck me that the kinds of &#8216;existential threats&#8217; under consideration here, and the probabilities assigned to them by whatever chosen representation, are also contributory factors of the factor <i>L</i> (representing the average lifetime of intelligent civilizations) in the Drake Equation (<a href="http://en.wikipedia.org/wiki/Drake_Equation">q.v.</a>) first discussed in this blog in <a href="https://www.blog.speculist.com/archives/000007.html">July of 2004</a> (see item #6), again in <a href="https://www.blog.speculist.com/archives/000372.html">2005</a>, and as recently as <a href="https://www.blog.speculist.com/archives/001139.html">last week</a>.  (Actually the sum of the probabilities of all existential risks, expressed as &#8216;years of lost life-expectancy&#8217;*, would be the reciprocal of L, if I recall my algebra correctly at this hour.)</p>
<p>*See <a href="http://www.phyast.pitt.edu/~blc/book/chapter8.html">Chapter 8 &#8211; Understanding Risk</a> in Bernard L. Cohen&#8217;s &#8220;<a href="http://www.phyast.pitt.edu/~blc/book/index.html">The Nuclear Energy Option</a>&#8221; for a discussion of formulating risk probabilities in this fashion</p>
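<p>The reciprocal-of-<i>L</i> observation above can be checked with a toy calculation. The risk categories and annual probabilities below are made-up figures for illustration only:</p>

```python
# If each existential risk contributes an independent annual probability of
# ending the civilization, their sum approximates the total annual risk, and
# the Drake Equation's L (average civilization lifetime) is roughly the
# reciprocal of that sum.
annual_risks = {
    "nuclear exchange": 1e-4,      # illustrative, not an actual estimate
    "engineered pandemic": 5e-5,   # illustrative
    "asteroid impact": 1e-6,       # illustrative
}
total_annual_risk = sum(annual_risks.values())
L = 1 / total_annual_risk          # expected lifetime in years
print(round(L))                    # ~6623 years under these made-up numbers
```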
<p><i>BTW &#8211; Thanks, Kathy, for providing such an interesting topic for discussion!</i></p>
]]></content:encoded>
			<wfw:commentRss>https://blog.speculist.com/speaking_of_the_future/doomsday-clock.html/feed</wfw:commentRss>
		<slash:comments>7</slash:comments>
		</item>
		<item>
		<title>Speaking of the Future</title>
		<link>https://blog.speculist.com/speaking_of_the_future/speaking-of-the.html</link>
		<comments>https://blog.speculist.com/speaking_of_the_future/speaking-of-the.html#comments</comments>
		<pubDate>Wed, 02 Aug 2006 11:25:40 +0000</pubDate>
		<dc:creator>Phil Bowermaster</dc:creator>
				<category><![CDATA[Speaking of the Future]]></category>

		<guid isPermaLink="false">http://localhost/specblog/?p=893</guid>
		<description><![CDATA[More blasts from the past, this time from various interviews over the years. Rand Simberg on why 2001 wasn&#8217;t like 2001: It was based on a lot of false assumptions, foremost being that the government was going to make it happen. We believed the rhetoric about &#8220;not because it&#8217;s easy, but because it&#8217;s hard,&#8221; and [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>More blasts from the past, this time from various interviews over the years.</p>
<p><a href="http://www.speculist.com/archives/000391.html">Rand Simberg</a> on why 2001 wasn&#8217;t like <em>2001</em>:<br />
<blockquote>
<p>It was based on a lot of false assumptions, foremost being that the government was going to make it happen. We believed the rhetoric about &#8220;not because it&#8217;s easy, but because it&#8217;s hard,&#8221; and the new frontier, and thought that the government actually cared about this stuff. But even the myth that a visionary president can lead us to the stars, exemplified by the Kennedy worshipers, has been shown to be false &#8211; he never gave a damn about space.</p>
<p>The irony is that if we hadn&#8217;t been derailed by Apollo, which had much more to do with waging the Cold War on a peaceful front, and industrializing the south, than space, we&#8217;d probably be a lot closer to the vision of 2001 today. The Air Force was flying into space with the X-15, and it&#8217;s possible that we would have continued along that path, a much more natural one, and that might have spun off into the private sector. But instead, in our hurry to get to the moon, we chose the most expensive way to do it, and established it as the fundamental paradigm for spaceflight that haunts us to this day.</p></blockquote>
<p><a href="http://www.speculist.com/archives/000177.html">Michael Anissimov</a> on what a person ought to be doing if he or she plans to live forever:<br />
<blockquote>
<p>First, read up on issues relevant to the future of humanity. Most of these issues are technological rather than political: nanotechnology, biotechnology, and Artificial Intelligence. If any one of these technologies were to go wrong, it wouldn&#8217;t matter how far along we were in traditional anti-aging research &#8211; all of humanity could be wiped out anyway. Second, get involved in the organizations promoting life extension and related futurist issues. For example, there is the Foresight Institute, and the Singularity Institute for Artificial Intelligence. One of the biggest flaws in the common conception of the future is that the future is something that happens to us, not something we create. Laypeople of all sorts can have a positive impact on the course of the future by cooperating with like-minded individuals. Third, be ethical and moral. Immortalism is a subset of transhumanism, or the philosophy that humanity deserves the right to improve itself technologically, and transhumanism originally derives from humanism. All human beings are equally valuable and special. The right to die is just as important as the right to live. Immortalism should be about expanding choices, not forcing a philosophical view onto others. Fourth, if you&#8217;re over 50, you might want to look into getting a cryonics contract. Lastly, eat right and exercise! If you&#8217;re someone who respects life in general, you should be concerned with the health of your own body.</p></blockquote>
<p><a href="https://www.blog.speculist.com/archives/000293.html">Adrian Bowyer</a> on what rapid replication means for the future of manufacturing:<br />
<blockquote>
<p>A manufacturing machine that can copy itself can create goods like no other technology we have &#8211; it is the only way to do so with exponential growth, for example. But by that very fact, both the machine and those goods have a value that, as the technology spreads, asymptotically approaches the value of the raw materials used. If you like to put it this way, the technology kills the idea of added value in material goods. Information is another matter.</p></blockquote>
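<p>Bowyer's point about exponential growth and vanishing added value can be illustrated with a toy model; the cost and premium figures below are invented for the sketch:</p>

```python
# A self-copying machine doubles the installed base each generation, so any
# scarcity premium above raw-material cost is diluted exponentially and the
# price asymptotically approaches the cost of the raw materials.
raw_material_cost = 20.0   # feedstock cost per machine (assumed)
initial_premium = 480.0    # scarcity premium when only one machine exists (assumed)

machines = 1
for generation in range(10):
    machines *= 2          # each machine copies itself once per generation

price = raw_material_cost + initial_premium / machines
print(machines, round(price, 2))   # 1024 20.47
```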
<p><a href="http://www.speculist.com/archives/000056.html">Aubrey de Grey</a> on the <em>real </em>reason he wants to eliminate aging:<br />
<blockquote>
<p>Well, first of all I have a lot of catching up to do &#8211; all the films I haven&#8217;t seen, books I haven&#8217;t read, etc. &#8211; while I&#8217;ve been spending every spare minute in the fight against aging. But in addition, there are masses of things that I enjoy doing and will always enjoy &#8211; spending time with my wife and friends, taking a punt out on the river Cam, playing a game of Othello, etc. &#8211; and I reckon I&#8217;ll just carry on doing those things forever.</p>
<p>At root, the reason I&#8217;m not in favor of aging is because I like life as I know it. </p></blockquote>
<p>And finally, <a href="http://www.speculist.com/archives/000146.html">Ramona</a>, taking a stab at dream interpretation:<br />
<blockquote>
<p>Well, according to my amateur Freudian interpretation, I&#8217;d have to say that you&#8217;re not getting out enough.</p></blockquote>
<p>Other favorite interviews would have to include the podcasts with <a href="https://www.blog.speculist.com/archives/000495.html">Christine Peterson</a> and <a href="https://www.blog.speculist.com/archives/000719.html">James C. Bennett</a>.</p>
]]></content:encoded>
			<wfw:commentRss>https://blog.speculist.com/speaking_of_the_future/speaking-of-the.html/feed</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>The Dyson Sphere of Influence</title>
		<link>https://blog.speculist.com/speaking_of_the_future/the-dyson-spher-1.html</link>
		<comments>https://blog.speculist.com/speaking_of_the_future/the-dyson-spher-1.html#comments</comments>
		<pubDate>Thu, 23 Mar 2006 09:36:04 +0000</pubDate>
		<dc:creator>Stephen Gordon</dc:creator>
				<category><![CDATA[Speaking of the Future]]></category>

		<guid isPermaLink="false">http://localhost/specblog/?p=649</guid>
		<description><![CDATA[]]></description>
				<content:encoded><![CDATA[<p><img alt="Dyson Sphere.jpg" src="https://www.blog.speculist.com/archives/Dyson Sphere.jpg" width="150" height="105" style="margin: 0px 5px 5px 0px; float: left;"/><a href="http://www.sns.ias.edu/~dyson/">Freeman Dyson</a>, physicist and philosopher who gave us the <a href="http://en.wikipedia.org/wiki/Dyson_sphere">Dyson Sphere</a>, believes the future needs a few good heretics.</p>
<p>In the commencement address Dyson delivered last December at the University of Michigan, he presented his speech layered between the introduction and the conclusion of a fable containing the heresy that misfortune is a gift in disguise. Once he had his audience&#8217;s attention, he then proposed three heretical scenarios for the future:</p>
<p>First Heresy: Global warming is grossly exaggerated</p>
<blockquote><p>&#8220;There is no doubt that parts of the world are getting warmer, but the warming is not global. The warming happens mostly in places and times where it is cold, in the arctic more than in the tropics, in winter more than in summer, at night more than in daytime. On the whole, the warming happens most where it does the least harm. I am not saying that the warming does not cause problems. Obviously it does. Obviously we should be trying to understand it better. I am saying that the problems are grossly exaggerated. They take away money and attention from other problems that are more urgent and more important, such as poverty and infectious disease and public education and public health, and the preservation of living creatures on land and in the oceans.&#8221;</p></blockquote>
<p>Second Heresy: Biotechnology will soon be domesticated.</p>
<blockquote><p>&#8220;&#8230;there is a close analogy between von Neumann&#8217;s vision of computers as large centralized facilities and the public perception of genetic engineering today as an activity of large pharmaceutical and agribusiness corporations such as Monsanto. The public distrusts Monsanto because Monsanto likes to put genes for poisonous pesticides into food-crops, just as we distrusted von Neumann because von Neumann liked to use his computer for designing hydrogen bombs. It is likely that genetic engineering will remain unpopular and controversial so long as it remains a centralized activity in the hands of large corporations.</p>
<p>I see a bright future for the biotechnical industry when it follows the path of the computer industry, the path that von Neumann failed to foresee, becoming small and domesticated rather than big and centralized.&#8221;</p></blockquote>
<p>Third Heresy: The United States has less than a century left of its turn as top nation.</p>
<blockquote><p>&#8220;Since the modern nation-state was invented around the year 1500, a succession of countries have taken turns at being top nation, first Spain, then France, Britain, America. Each turn lasted about 150 years. Ours began in 1920, so it should end about 2070. The reason why each top nation&#8217;s turn comes to an end is that the top nation becomes over-extended, militarily, economically and politically. Greater and greater efforts are required to maintain the number one position. Finally the over-extension becomes so extreme that the whole structure collapses. Already we can see in the American posture today some clear symptoms of over-extension. Who will be the next top nation? You should be asking yourselves, not how to live in an America-dominated world, but how to prepare for a world that is not America-dominated. That may be the most important problem for your generation to solve.&#8221;</p></blockquote>
<p>Read the entire <a href="http://www.umich.edu/news/index.html?DysonWinCom05">transcript</a>.</p>
]]></content:encoded>
			<wfw:commentRss>https://blog.speculist.com/speaking_of_the_future/the-dyson-spher-1.html/feed</wfw:commentRss>
		<slash:comments>9</slash:comments>
		</item>
		<item>
		<title>Riding the Spiral, Part 2</title>
		<link>https://blog.speculist.com/speaking_of_the_future/riding-the-spir.html</link>
		<comments>https://blog.speculist.com/speaking_of_the_future/riding-the-spir.html#comments</comments>
		<pubDate>Thu, 04 Dec 2003 13:03:28 +0000</pubDate>
		<dc:creator>Phil Bowermaster</dc:creator>
				<category><![CDATA[Speaking of the Future]]></category>

		<guid isPermaLink="false">http://localhost/specblog/?p=15</guid>
		<description><![CDATA[This is the second half of our groundbreaking interview with John Smart of the Institute for Accelerating Change.(Read Part One) The Developmental Singularity I&#8217;m familiar with the idea of a singularity from reading about black holes.Â As I understand it, the event horizon of a black hole is the point beyond which no light can [&#8230;]]]></description>
				<content:encoded><![CDATA[<blockquote><p>This is the second half of our groundbreaking interview with John Smart of the Institute for Accelerating Change. (Read <a href="https://www.blog.speculist.com/archives/000386.html">Part One</a>)</p></blockquote>
<p><b>The Developmental Singularity</b></p>
<p><b>I&#8217;m familiar with the idea of a singularity from reading about black holes. As I understand it, the event horizon of a black hole is the point beyond which no light can escape. Perceived time slows to an absolute standstill at the event horizon. At the singularity, gravity becomes infinite, and what we normally think of as the &quot;laws of nature&quot; cease to function the way we expect them to. The singularity seems to be the ultimate physical enigma. What then is this technological singularity, and in what way is it analogous to the singularity of a black hole?</b> </p>
<p>This last question<br />
  may be the most important of our time, with regard to understanding the future<br />
  of universal intelligence. Or it may be a greased pig chase. Only posterity<br />
  can decide. </p>
<p>I&#8217;ve been chipping away at the topic since the seventh grade in high school, when I had a series of early and very elegant intuitions in regard to accelerating change, speculations that I&#8217;d love to see seriously researched and critiqued in coming years. In 1999 I started a website on the subject, <a href="http://singularitywatch.com/">SingularityWatch.com</a>. In 2001 I did an <a href="http://www.nanomagazine.com/i.php?id=01_06b_08">extended interview</a> for <b>Sander Olson</b> at Nanomagazine.com, and in 2003 I and a few other colleagues formed a nonprofit, the <a href="http://accelerating.org/">Institute for Accelerating Change</a> (Accelerating.org), to further inquiry in this area. The most important thing we&#8217;ve done to date is a very well-received conference at Stanford, <a href="http://www.accelerating.org/acc2003/conf_home.htm">Accelerating Change 2003</a>. Finally, I&#8217;m presently writing a book, <i>Destiny of Species</i>, on the topic of accelerating change, but please don&#8217;t ask me how it&#8217;s progressing, or it will reliably put me in a bad mood.</p>
<p>To begin unpacking<br />
  this question, it helps to realize that there is a <a href="http://www.singularitywatch.com/#menagerie">menagerie<br />
  of singularities</a> in various literatures that we could study, with gravitational<br />
  singularities being just the most well-known type. Some generalizations can<br />
  be made, possible clues to a useful definition. Every one of these processes<br />
  engages a special set of locally accelerating dynamics that transition to some<br />
  irreversible systemic change, involving emergent features which are, at least<br />
  in part, intrinsically unpredictable from the perspective of the pre-singularity<br />
  system. </p>
<p>But before we go further, I shall lay my biases on the table. I am a systems theorist. The systems theorist&#8217;s working hypothesis &#8211; and fundamental conceit &#8211; is that analogical thinking is more powerful and broadly valuable than analytical thinking in almost all cases of human inquiry. This doesn&#8217;t excuse us from bad analogies, which are legion, and it doesn&#8217;t make quantitative analysis wrong, it just places math and logic in their proper place as powerful tools of inquiry used by weakly digital minds. Today&#8217;s quantitative and logical tools are enabled by the underlying physics of the universe, which are much more sublime, and such tools often have no relation to real physical processes, which may use quanta and dimensionalities entirely inaccessible to our current symbolisms. </p>
<p>Furthermore, I take the &quot;infopomorphic&quot; (as compared to &quot;anthropomorphic&quot;) view, that all physical systems in the universe, including us precious bipeds and even the universe itself, are engaged in computation, in service to some grander purpose of self- and other-discovery. This philosophy has also been described as &quot;digital physics,&quot; and one of several variants can be found at <b>Ed Fredkin&#8217;s</b> <a href="http://www.digitalphilosophy.org/">Digital Philosophy</a> website. It has also been elegantly introduced by <b>John Archibald Wheeler&#8217;s</b> &quot;It from Bit,&quot; 1989 (see <i><a href="http://www.amazon.com/exec/obidos/tg/detail/-/0521568374/">Physical Origins of Time Asymmetry</a></i>, 1996). </p>
<p>Finally, I am an evolutionary developmentalist, one who believes that all important systems in the world, parsimoniously including the universe itself, must both evolve unpredictably and develop predictably. That makes understanding the difference between evolution and development one of the most important programs of inquiry. The meta-Darwinian paradigm of evolutionary development, well described by such innovative biologists as <b>Rudolf Raff</b> (see <i><a href="http://www.amazon.com/exec/obidos/tg/detail/-/0226702669/">The Shape of Life</a></i>, 1996), <b>Simon Conway Morris, Wallace Arthur, Stan Salthe</b>, <b>William Dembski</b>, and <b>Jack Cohen</b>, is one that situates orthodox neo-Darwinism as a chaotic mechanism that occurs within (or in some versions, in symbiosis with) a much larger set of statistically deterministic, purposeful developmental cycles. There are now a number of scientists applying this view to both living and physical systems, including those exploring such topics as self-organization, convergence, hierarchical acceleration, anthropic cosmology, Intelligent Design, and a number of other subjects that are very poorly explained by the classical Darwinian theory championed by <b>Stephen Jay Gould</b> and <b>Richard Dawkins</b>.</p>
<p>Systems theorists<br />
  require some perspective to play their analogy games, so please indulge me as<br />
  we engage briefly and coarsely in big picture history in order to discuss the<br />
  singularity phenomenon. During the seventeenth century, with <b>Isaac Newton&#8217;s</b><br />
  <i>Principia</i> (1687), it seems fair to say that humanity awakened to the<br />
  realization that we live in a fully physical universe. During the early twentieth<br />
  century, with <b>Kurt G&#246;del&#8217;s</b> Incompleteness Theorem (1931) and the <b>Church-Turing</b><br />
  Thesis (1936) we came to suspect that we also live in a fully computational<br />
  universe, and that within each discrete physical system there are intrinsic<br />
  limits to the kinds of computation (observation, encoding) that can be done<br />
  to the larger environment. Presumably, the persistence of these limits, and<br />
  their interaction with the remaining inaccessible elements of reality, spurs<br />
  the development of new, more computationally versatile systems, via increasingly<br />
  more rapid hierarchical &quot;substrate&quot; emergences over time. At each<br />
  new emergence point a singularity is created, a new physical-computational system<br />
  suddenly and disruptively arises, a phase change of some definable type occurs.<br />
  At this point, a new local environment, or &quot;phase space&quot; is created<br />
  wherein very different local rules and conditions apply. That&#8217;s one predominant<br />
  systems model for singularities, at any rate.</p>
<p>From this physical-computational<br />
  perspective, replicating suns, spewing their supernovas across galactic space,<br />
  can be seen as rather simple physical-computational systems that, over billennia,<br />
  nevertheless encode a &quot;record&quot; of their exploration of physical reality,<br />
  their computational &quot;phase space.&quot; This record appears to us in the<br />
  form of the periodic table. Once that elemental matrix becomes complex enough,<br />
  and carbon, nitrogen, phosphorous, sulfur, and friends have emerged, we notice<br />
  a new singularity occur in specialized local environments, wherein the newest<br />
  computational game becomes replicating organic molecules, chasing their own<br />
  tails in protometabolic cycles (see <b>Stuart Kauffman, </b><i><a href="http://www.amazon.com/exec/obidos/tg/detail/-/0195111303/">At Home in the Universe</a>, </i>1996). </p>
<p>Again, these systems<br />
  developmentally encode their evolutionary exploration by constructing a range<br />
  of complex polymerizing systems, including autocatalytic sets. Once a particular<br />
  set becomes complex enough, we again see another phase change singularity, with<br />
  the first DNA-guided protein synthesis emerging on the geological Earth-catalyst,<br />
  even before its crust has begun to cool. As precursors to fats, proteins, and<br />
  nucleic acids have all been found in our interplanetary comet chemistry, and<br />
  as we suspect that chemistry to be common throughout our galaxy, it is becoming<br />
  increasingly plausible that every one of the billions of planets (in this galaxy<br />
  alone) that are capable of supporting liquid water for billions of years may<br />
  be primed for our special type of biogenesis. This proposed transition, a singularity<br />
  in an era of accelerating molecular evolutionary development, is what <b>A.G.<br />
  Cairns-Smith </b>calls &quot;genetic takeover,&quot; an evocative phrase. Such<br />
  unicellular emergence very likely leads in turn to multicellularity, then to<br />
  differentiated multicellular systems encoding useful neural arborization patterns,<br />
  another singularity (570 million years ago), which leads to big-brained mammals<br />
  encoding mimicry memetics (100 million years ago) and to hominids encoding and<br />
  processing oral linguistic memetics (10-5 million years ago), then to the first<br />
  extrabiological technology (soft-skinned <i>Homo habilis</i> collectively throwing<br />
  rocks at more physically powerful leopard predators, 2 million years ago), then<br />
  to today&#8217;s semi-autonomous digital technological systems, encoding their own<br />
  increasingly successful algorithms and world-models. (Forgive me if we skipped<br />
  a few steps in this illustration).</p>
<p>Systems thinkers, since at least <b>Henry Adams</b> in 1909, have noted that<br />
  each successive emergence is vastly shorter in time than the one that preceded<br />
  it. Some type of global universal acceleration seems to be part and parcel to<br />
  the singularity generation process. Note also that each of the computational<br />
  systems that generates a singularity is incapable of appreciating many of the<br />
  complexities of the progeny system. A sun has little computational capacity<br />
  to &quot;understand&quot; the organic chemistry it engenders, even as it creates<br />
  and interacts intimately with that chemistry. A bacterium does not deeply comprehend<br />
  the multicellular organisms which spring from its symbiont colonies, even as<br />
  it adapts to life on those organisms, and thus learns at least something reliable<br />
  about their nature. Humanity, in turn, can have little understanding of the<br />
  subtle mind-states of the A.I.s to come, even as we become endosymbiotically<br />
  captured by and learn to function within that system, in the same way bacteria<br />
  (our modern mitochondria) were captured by the eukaryotic cell.</p>
<p>Yet at the same<br />
  time, the more complex any system becomes, the better it models the universe<br />
  that engendered it, and the better it understands its own history, the physical<br />
  chain of singularities that created it. That also implies, if you consider the<br />
  recursive, self-similar nature of the singularity generation process, the better<br />
  it understands its own developmental future as well. If our entire universe<br />
  is evolutionary developmental, which is an elegantly simple possibility, then<br />
  it is constrained to head in some particular direction, a trajectory that we<br />
  are beginning to see clearly even today. </p>
<p>For a very incomplete<br />
  outline of this trajectory, we can propose that the universe must invariably<br />
  increase in average general entropy (in practice, if not in theory), with islands<br />
  of locally accelerating order, that each hierarchical system must emerge from<br />
  and operate within an increasingly localized spacetime domain, and that the<br />
  network intelligence of the most complex local systems must always accelerate<br />
  over time. The simplicity of such macroscopic, developmental rules and of developmental<br />
  convergence in general, by comparison to the unpredictable complexity of the<br />
  microscopic, evolutionary features of any complex system, is what allows even<br />
  twenty-first century humans to see many elements of the framework of the future,<br />
  even if the evolutionary details must always remain obscure. </p>
<p>This surprising<br />
  concept, the &quot;unreasonable effectiveness&quot; of simple mathematics, analogies,<br />
  and basic rules and laws for explaining the stable features of otherwise very<br />
  complex universal systems has been called <a href="http://www.singularitywatch.com/watermark.html">Wigner&#8217;s<br />
  Ladder</a>, after <b>Eugene Wigner&#8217;s</b> famous <a href="http://www.dartmouth.edu/~matc/MathDrama/reading/Wigner.html">1960 paper</a><br />
  on this topic. As I will explore later, a developmentalist like myself begins<br />
  his inquiry by suspecting that the universe has <i>self-organized</i>, over<br />
  many successive cycles, to create its presently stunning set of hierarchical<br />
  complexities, in the same manner as my own complexity has self-organized, over<br />
  five billion years of genetic cycling, to create the body and mind that I use<br />
  today. Furthermore, if emergent intelligence can be shown to play any role in<br />
  guiding this cycling process, then it seems quite likely that if the universe<br />
  could, it would tune itself for Wigner&#8217;s Ladder to be very easy to climb by<br />
  emerging computational systems at every level during the universal unfolding.<br />
  This process would ensure that intelligence development, versus all manner of<br />
  destructive shenanigans, is a very rewarding, very robust, strongly non-zero-sum<br />
  game, at every level of universal development.</p>
<p>Certainly there<br />
  seems evidence for this at any system level we observe. The developing brain<br />
  is an amazingly friendly environment for our scaffolding neurons to emerge within.<br />
  They seem to discover, with very little effort, the complex set of signal transductions<br />
  necessary to get them to useful places within the system, all with a surprisingly<br />
  simple agent-based model of the environment in which they operate. In another<br />
  example, a non-linguistic proto-mammal of 100 million years ago (or today&#8217;s<br />
  analog), if placed in a room with you today, would develop a surprisingly useful<br />
  sense of who you are and what general behaviors you were capable of after only<br />
  short exposure, even though it would never figure out your language or your<br />
  internal states. Even a modest housefly, after a reasonable period of exposure<br />
  to 21<sup>st</sup> century humans, is rarely so surprised by their behavior<br />
  that it dies when poaching their fruit. So it is that all the universe&#8217;s pre-singularity<br />
  systems internalize quite a bit of knowledge concerning the post-singularity<br />
  systems, even if they never understand their internal states. I contend that<br />
  human beings, with the greatest ability yet to look back in time to the processes<br />
  that create us, have a very powerful ability to look forward as well with regard<br />
  to developmental processes. I think we can use this developmental insight to<br />
  foretell a lot about the necessary trajectory of the post-singularity systems<br />
  on the other side. </p>
<p>Given the empirical<br />
  evidence of MEST compression over the last half of the universe&#8217;s developmental<br />
  history, where the dominant substrates have transitioned from galaxies to stars<br />
  to planetary surfaces to biomass to multicellular organisms to conscious hominids<br />
  and soon, to conscious technology that will, for an equivalent complexity, be<br />
  vastly faster and more compact than our own bodies (which are filled mostly<br />
  with housekeeping systems, not computing architectures), it seems almost painfully<br />
  obvious to me that the constrained trajectory of all multi-local universal intelligence<br />
  has been, to date, one that is headed relentlessly toward inner space, not outer<br />
  space. The extension of this trajectory must lead, it seems, to black hole level<br />
  energy densities in the foreseeable future. Indeed, some prominent physicists<br />
  have drawn surprisingly similar conclusions using lines of reasoning entirely<br />
  independent from my own (see <b>Seth Lloyd&#8217;s</b> &quot;Ultimate Physical Limits<br />
  to Computation,&quot; <i>Nature,</i> 2000, and <b>Eric Chaisson&#8217;s</b> <i><a href="http://www.amazon.com/exec/obidos/ASIN/067400342X/">Cosmic Evolution</a></i>,<br />
  2001).</p>
<p>I call this the <a href="http://www.singularitywatch.com/specu.html">developmental singularity hypothesis</a>,<br />
  and it is admittedly quite speculative. It is also known as the transcension<br />
  scenario, as opposed to the expansion scenario, for the future of local intelligence.<br />
  The expansion scenario, the expectation that our human descendants will one<br />
  day colonize the stars, is today an almost universal de facto assumption of<br />
  the typical futurist. I consider that model to be 180 degrees incorrect. Outer<br />
  space, for human science, will increasingly become an informational desert, by<br />
  comparison to the simulation science we can run here, in inner space. I suggest<br />
  that the cosmic tapestry that we see in the night sky may be most accurately<br />
  characterized as the &quot;rear view mirror&quot; on the developmental trajectory<br />
  of physical intelligence in universal history. It provides a record of far larger,<br />
  far older, and far simpler computational structures than those we are constructing<br />
  here, today, in our increasingly microscopic environments.</p>
<p>Let me relate<br />
  some personal background on this insight. As a child, I was extremely fortunate<br />
  to grow up with a subscription to <i><a href="http://www.nationalgeographic.com/">National<br />
  Geographic</a></i> magazine. When I discovered that my high school library (Chadwick<br />
  School) had issues back to the beginning of the century, it became one of my<br />
  favorite haunts. This led to a series of lucky events, including a very special<br />
  seventh-grade history class (thank you, Mr. Bullin) where we discussed both<br />
  universal and human development, and later, an English class where the summer<br />
  reading was <b>Charles Darwin&#8217;s</b> <i><a<br />
href="http://www.amazon.com/exec/obidos/tg/detail/-/014043268X/">Voyage of the<br />
  Beagle</a></i>, 1909. I was a very inconsistent daydreamer of a student in those<br />
  days. When I finally got around to reading the <i>Beagle</i>, the story of the<br />
  energetic young Darwin wherein he developed the background knowledge that inexorably<br />
  led him to his Great Idea, I could not escape the realization that I&#8217;d also<br />
  discovered a similar great idea myself during all those lazy afternoons, flipping<br />
  magazines and thinking. </p>
<p>The idea was essentially<br />
  this: every new system of intelligence that emerges in the universe clearly<br />
  occupies a vastly smaller volume of space, and plays out its drama using vastly<br />
  smaller amounts of matter, energy, and time. At the same time, any who are aware<br />
  of the amazing replicative repetitiveness of astronomical features would suspect<br />
  that there are likely to be billions of intelligences like ours within the universe. Yet<br />
  we have had no communication from any of them, even from those Sun-like stars,<br />
  closer to our own galactic center, which are billions of years older than ours.<br />
  This curious situation is called the Fermi Paradox, after <b>Enrico Fermi</b>,<br />
  who, in 1950, asked the famous question, &quot;Where Are They?&quot; in<br />
  relation to these older, putatively far more technologically advanced civilizations.<br />
  Contemplating this question in 1972, it struck me that the entire system is<br />
  apparently structured so that intelligence inexorably transcends the universe,<br />
  rather than expanding within it, and that black holes, those curious entities<br />
  that exist both within and without our universe, probably have something central<br />
  to do with this process. These simple ideas were the seed of the developmental<br />
  singularity hypothesis, and I&#8217;ve been tinkering with it ever since.</p>
<p>All this brings<br />
  us to the interesting question of the future of artificial intelligence. </p>
<p>Given the background<br />
  I have related above, I have the strong suspicion that when our A.I. wakes up,<br />
  regardless of what it does in its inner world, it will increasingly transition<br />
  into what looks to the rest of the universe like a black hole. This &quot;intelligent&quot;<br />
  black hole singularity apparently results from an accelerating process of matter,<br />
  energy, space, and time compression (MEST compression) of universal computation,<br />
  in the same way that gravitation drives the accelerating formation of stellar<br />
  and galactic black hole singularities, which seem to be analogous end states,<br />
  in this universe, of much simpler cycling complex adaptive systems.</p>
<p>From our perspective<br />
  this may be an entirely natural, incremental, and reversible (at least temporarily)<br />
  development, and if it occurs, we will very likely all be taken along for the<br />
  ride as well, in a voluntary process of transformation. This &quot;inclusive&quot;<br />
  feature of the transition seems reasonable if one makes a chain of presently<br />
  thinly-researched assumptions, including: 1) that the A.I.s will have significantly<br />
  increased consciousness at or shortly after their emergence, 2) that once they<br />
  have modeled us and all other life forms to the point of real-time predictability,<br />
  they will be ethically compelled to ubiquitously share this gift, 3) that all<br />
  life forms will find such a gift to be irresistible, and 4) by the simple act<br />
  of sharing they will turn us into them. This convergent planetary transition<br />
  to the postbiological domain would comprise a local &quot;technetic takeover&quot;<br />
  as complete as the &quot;genetic takeover&quot; that led to the emergence of<br />
  DNA-guided protein synthesis as the sole carrier of higher local intelligence<br />
  after biogenesis.</p>
<p>I&#8217;ll forgive you<br />
  if you think at this point that I&#8217;ve taken leave of my senses, and I&#8217;m not going<br />
  to try to defend these perspectives further here, as that would be beyond the<br />
  scope of this interview, and more appropriate to my forthcoming book. But if<br />
  you are interested in conducting your own research, consider exploring the link<br />
  above, and reading some helpful books that each explore important pieces of<br />
  the larger idea. You might start with <b>Lee Smolin&#8217;s</b> <i><a<br />
href="http://www.amazon.com/exec/obidos/tg/detail/-/0195126645/">The Life of the<br />
  Cosmos</a></i>, 1997, <b>Eric Chaisson&#8217;s</b> <i><a<br />
href="http://www.amazon.com/exec/obidos/tg/detail/-/0674009878/">Cosmic Evolution</a></i>,<br />
  2001, and <b>James Gardner&#8217;s</b> <i><a<br />
href="http://www.amazon.com/exec/obidos/tg/detail/-/1930722222/">Biocosm</a></i>,<br />
  2003. You could also peruse <b>Sheldon Ross&#8217;s</b> <i><a<br />
href="http://www.amazon.com/exec/obidos/ASIN/0125980531/">Simulation</a></i>,<br />
  2001, though that is a technical work. If you have any feedback at that point,<br />
  send me an email and let me know what you think.</p>
<p><b>I remember I first encountered this idea in a science fiction story that<br />
  I considered to be entertaining, but closer to fantasy than true science fiction.<br />
  It did not appear to be grounded in reality. A short time later I was given<br />
  a copy of Vernor Vinge&#8217;s essay on the singularity and I began to reconsider<br />
  whether there might not be something to it. Does the idea of the singularity<br />
  originate with Vinge or elsewhere?</b></p>
<p>In my research<br />
  to date, the first clear formulation of the singularity idea originated with<br />
  one of America&#8217;s earliest technology historians, <b>Henry Adams, </b>in &quot;A<br />
  Rule of Phase Applied to History,&quot; 1909, the same fortuitous year as the<br />
  edition of <b>Charles Darwin&#8217;s</b> <i>Beagle</i> mentioned above. Readers are referred to our <a<br />
href="http://www.singularitywatch.com/history_brief.html">Brief History of Intellectual<br />
  Discussion of the Singularity</a> for more on that amazing story, which mentions<br />
  a number of careful thinkers who have illuminated different pieces of the accelerating<br />
  elephant in the century since. </p>
<p>Since 1983, as<br />
  you mention, the mathematician, computer scientist, and science fiction author<br />
  <b>Vernor Vinge</b> has given some of the best brief arguments to date for this<br />
  idea. His eight-page internet essay, &quot;<a<br />
href="http://www.ugcs.caltech.edu/~phoenix/vinge/vinge-sing.html">The Coming Technological<br />
  Singularity</a>,&quot; 1993, is an excellent place to start your investigation<br />
  of the singularity phenomenon. I would also recommend my introductory web site,<br />
  <a href="http://singularitywatch.com/">SingularityWatch.com</a>, and a few others,<br />
  such as <a href="http://www.kurzweilai.net/">KurzweilAI.net</a>, which are referenced<br />
  at my site.</p>
<p><b>Here&#8217;s a quote from your SingularityWatch web site: &quot;[Research suggests<br />
  that] there is something about the construction of the universe itself, something<br />
  about the <i>nature and universal function of local computation</i> that permits,<br />
  and may even mandate, continuously accelerating computational development in<br />
  local environments.&quot; This sounds like metaphysics to me. How could a universe<br />
  with such properties come to exist? Does this imply some kind of intelligent<br />
  design?</b></p>
<p>That depends very<br />
  much on what you consider &quot;intelligence,&quot; I think. One initially suspects<br />
  some kind of intelligence involved in the continually accelerating emergences<br />
  we have observed. In the phase space of all possible universes consistent with<br />
  physical law, one wouldn&#8217;t find our kind of accelerating, life-friendly universe<br />
  in a random toss of the coin, or as various anthropic cosmologists have pointed<br />
  out, even in an astronomically large number of random tosses of the coin. Some<br />
  deep organizing principles are likely to be at work, principles that may themselves<br />
  exhibit a self-organizing intelligence over time. Systems theorists look for<br />
  broad views to get some perspective on this question, so bear with me as we<br />
  consider an abstract model for the dynamics that may be central to the issue.</p>
<p>Everything really<br />
  interesting in the known universe appears to be a replicating system. Solar<br />
  systems, complex planets, organic chemistry, cells, multicellular organisms,<br />
  brains, languages, ideas, and technological systems are all good examples. Each<br />
  undergoes replication, variation, interaction, selection, and convergence, in<br />
  what may be called an RVISC developmental cycle. Given this extensive zoology,<br />
  it is most conservative, most parsimonious to assume that the physical universe<br />
  we inhabit is just another such system. </p>
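<p>The RVISC developmental cycle described above can be caricatured in a few lines of code. This is a toy sketch only: it assumes, purely for illustration, that an individual is a single number, that &quot;interaction and selection&quot; means scoring against a fixed target, and that &quot;convergence&quot; means keeping the best half for the next replication round; none of these names or choices come from the interview itself:</p>

```python
# Toy RVISC loop: Replication, Variation, Interaction/Selection, Convergence.
import random

random.seed(0)

def rvisc_step(population, target=100.0):
    # Replication: each individual copies itself.
    offspring = [x for x in population for _ in range(2)]
    # Variation: copies differ slightly from their parent.
    offspring = [x + random.uniform(-5, 5) for x in offspring]
    # Interaction/Selection: individuals closer to the target score better.
    offspring.sort(key=lambda x: abs(target - x))
    # Convergence: only the best half seeds the next cycle.
    return offspring[:len(population)]

pop = [0.0] * 20
for _ in range(50):
    pop = rvisc_step(pop)

# After many cycles the population has converged near the target.
print(round(sum(pop) / len(pop)))
```

<p>Evolution (variation, interaction, selection) happens inside the loop body; development (replication and convergence) forms its statistically predictable boundaries, which is the distinction drawn a few paragraphs below.</p>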
<p>Big bang theorists<br />
  tell us the universe had a very finite beginning. Since 1998, lambda energy<br />
  theorists have told us that our 13.7 billion year universe is already one billion<br />
  years into an accelerating senescence, or death. Multiverse cosmologists tell<br />
  us that ours is just one of many universes, and some, such as <b>Lee Smolin</b>,<br />
  <b>Alan Guth</b>, and <b>Andrei Linde</b>, have suggested that black holes are<br />
  the seeds of new universe creation. If so, that would make this universe a very<br />
  fecund replicator, as relativity theory predicts that at least 100 trillion black<br />
  holes are in existence at the present time.</p>
<p>For each of the<br />
  above reproducing complex adaptive systems (CASs, in <b>John Holland&#8217;s</b> use<br />
  of the term), there are at least two important mechanisms of change we need<br />
  to consider: evolution and development. Evolution involves the Darwinian mechanisms<br />
  of variation, interaction, and selection, the VIS in the middle of the RVISC<br />
  cycle. Development involves statistically deterministic mechanisms of replication<br />
  and convergence, the &quot;boundaries&quot; of the RVISC reproduction cycle<br />
  for any complex system.</p>
<p>Consider human<br />
  beings. Our intelligence is both evolutionary and developmental. Each of us<br />
  follows an evolutionary path, the unique memetic (ideational) and technetic<br />
  (tools and technologies) structures that we choose to use and build. (As individuals<br />
  we also follow a genetic evolutionary path, but this is so slow and constrained<br />
  that it has become future-irrelevant in the face of memetic and technetic evolution.)<br />
  At the same time, we must all conform to the same fixed developmental cycle,<br />
  a 120-year birth-growth-maturity-reproduction-senescence-death Ferris wheel<br />
  that none of us can appreciably alter, only destroy. The special developmental<br />
  parameters, the DNA genes that guide our own cycle, were tuned up over millions<br />
  of years of recursive evolutionary development to produce brains capable of<br />
  complex behavioral mimicry memetics, and then linguistic mimicry memetics, astonishing<br />
  brains that now cradle our own special self-awareness. </p>
<p>Now contemplate<br />
  our own universe, and imagine as <b>Teilhard de Chardin</b> did with his intriguing<br />
  &quot;cosmic embryogenesis&quot; metaphor, that it is an evolutionary developmental<br />
  entity with a life and death of its own. In fact, heat death theorists have<br />
  known the universe has a physical lifespan for almost two centuries, but we,<br />
  thinking like immortal youth, still commonly ignore this. Multiverse models<br />
  explore how replicating universes might tune up their developmental genes, over<br />
  successive cycles, to usefully use the intelligence created within the &quot;soma&quot;<br />
  (body, universe), in the same way that human genes have tuned up to use human<br />
  intelligence and finite human lifespan in their own replication. See <b>Tom<br />
  Kirkwood&#8217;s</b> work on the <a<br />
href="http://avsunxsvr.aeiveos.com/agethry/disoma/">Disposable Soma Theory</a>,<br />
  in <i><a href="http://www.amazon.com/exec/obidos/tg/detail/-/0756761530/">Time<br />
  of our Lives</a></i>, 1999, for one very insightful explanation of the dynamic.
  </p>
<p>Next, consider<br />
  this: If encoded intelligence usefully influences the replication that occurs<br />
  in the next developmental cycle, and we can make the case that it always would,<br />
  by comparison to otherwise random processes, then universes that encode the<br />
  emergence of increasingly powerful universe-modeling intelligence will always<br />
  outcompete those that don&#8217;t, in the multiversal environment. </p>
<p>When I relay these<br />
  thoughts to patient listeners, a question commonly arises: Why wouldn&#8217;t universes<br />
  emerge which seek to keep cosmic intelligence around forever? This question<br />
  seems equivalent to asking why it is that our genes &quot;choose&quot; to continue<br />
  to throw away our adult forms in almost all higher species in competitive environments.<br />
  The answer likely has to do with the fact that any adult structure has a fixed<br />
  developmental capacity, based on the potential of its genes, and once the capacity<br />
  has been expressed and accelerating intelligence is no longer occurring in the<br />
  adult form, it becomes obvious that the adult structure is just not that smart<br />
  in relation to the larger universe. At that point, recycling becomes a more<br />
  resource efficient computing strategy than revising. Let&#8217;s propose that the<br />
  A.I.&#8217;s to come, even as they rapidly learn what they can within this universe,<br />
  remain of sharply fixed complexity, while operating in a much larger, G&#246;delian-incomplete<br />
  multiverse. As long as that multiverse continues to represent a combinatorial<br />
  explosion of possibilities, universal computing systems will likely remain stuck<br />
  on a developmental cycle, trading off between phases of parameter-tuning reproduction<br />
  and intelligence unfolding. Both of these stages of the cycle incorporate evolution<br />
  and development. Another way that systems theorists have explored the yin-yang<br />
  of this cycle is in terms of <b>Francis Heylighen </b>and<b> Donald Campbell&#8217;s</b><br />
  insights on <a<br />
href="http://pespmc1.vub.ac.be/DOWNCAUS.html">downcausality</a> (including parameter<br />
  tuning) and upcausality (including hierarchical emergence), useful extensions<br />
  of the popular concepts of holism and reductionism. </p>
<p>If we live in<br />
  a universe populated by an &quot;ecology of black holes,&quot; as I suspect,<br />
  then we will soon discover that most of them, such as galactic and stellar gravitational<br />
  black holes, can only reproduce universes of low complexity. In a paradigm of<br />
  self-organization, of iterative evolutionary development, these cycling complex<br />
  adaptive systems may be the stable base, the lineage out of which our much more<br />
  impressively intelligence-encoding universe has emerged, in the same way that<br />
  we have been built on top of a stable base of cycling bacteria. How long our<br />
  own universe will continue cycling in its current form is anyone&#8217;s guess, at<br />
  present. But we may note that in living systems, while developmental cycles<br />
  can continue for very long periods of time, they are never endless in any particular<br />
  lineage. So it may be that recurrence of the &quot;type&quot; of universe we<br />
  inhabit also has a limited lifespan, before it becomes another &quot;type.&quot;</p>
<p>Fortunately, all<br />
  of this should become much more tractable to proof by simulation, as well as<br />
  by limited experiment, in coming decades. As you may know, high energy physicists<br />
  are already expecting that we may soon gain the ability to probe the fabric<br />
  of the multiverse via the creation of so-called &quot;extreme black holes&quot;<br />
  of microscopic size in the laboratory (e.g., CERN&#8217;s Large Hadron Collider),<br />
  possibly even within the next decade. At the same time, black hole analogs for<br />
  capturing light, electrons, and other quanta are also in the planning stages.<br />
  With regard to microcosmic reality, I find that truth is always more interesting<br />
  than fiction, and often less believable, at first blush.</p>
<p>Using various<br />
  forms of the above model,<b> James N. Gardner, Bela Balasz, Ed Harrison</b>,<br />
  <b>myself</b>, and a handful of others have proposed that our human intelligence<br />
  may play a central role in the universal replication cycle. In the paradigm<br />
  of evolutionary development, that would make our own emergence (but not our evolutionary<br />
  complexities) developmentally tuned, via many previous cycles, into our universal<br />
  genes. </p>
<p>This gene-parameter<br />
  analogy is quite powerful. You wouldn&#8217;t say that any reasonable amount of your<br />
  adult complexity is contained in the paltry 20,000-30,000 genes that created<br />
  you. In fact the developmental genes that <i>really</i> created you are a small<br />
  subset of those, numbering perhaps in the <i>hundreds</i>. These genes don&#8217;t<br />
  specify most of the complexity contained in the 100 trillion connections in<br />
  your brain. They are merely developmental guides. Like the rules of a low-dimensional<br />
  cellular automaton, they control the <i>envelope boundaries of the evolutionary<br />
  processes</i> that created you. So it may be with the 20-60 known or suspected<br />
  physical parameters and coupling constants underlying the Standard Model of<br />
  physics, the parameters that guided the Big Bang. They are perhaps best seen<br />
  as developmental guides, determining a large number of emergent features, but<br />
  never specifying the evolution that occurs within the unfolding system. </p>
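<p>The cellular-automaton analogy is easy to demonstrate concretely. An elementary CA such as rule 30 (my choice of example, not one named in the interview) has a rule table of only eight entries, yet from a single live cell it generates a complex, unpredictable pattern; the rule constrains the envelope of what can happen without specifying the details, just as developmental genes or physical constants might:</p>

```python
# Elementary cellular automaton: an 8-entry rule table (rule 30)
# bounds, but does not specify, the complex pattern that unfolds.
RULE = 30

def step(cells):
    """Apply the rule to each cell based on its 3-cell neighborhood
    (wrapping around at the edges)."""
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Start from a single live cell and evolve for a few generations.
row = [0] * 31
row[15] = 1
for _ in range(15):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```

<p>Eight bits of &quot;genome&quot; determine every boundary of the triangle that unfolds, but not the chaotic texture inside it.</p>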
<p>As anthropic cosmologists<br />
  (those who suspect the universe is specifically structured to create life) are<br />
  discovering, a number of our universal parameters (e.g., the gravitational constant,<br />
  the fine structure constant, the mass of the electron, etc.) appear to be very<br />
  finely tuned to create a universe that must develop life. As cosmology delves<br />
  further into M-Theory, anthropic issues are intensifying, not subsiding. Some<br />
  theorists, such as <b>Leonard Susskind</b>, have estimated that there are an<br />
  incredibly large number of <a href="http://arxiv.org/pdf/hep-th/0302219">string<br />
  theory vacua</a> from which our particular universal parameters were somehow<br />
  specified to emerge. </p>
<p>If you wish to<br />
  understand just how powerful developmental forces are, think not only of <b>Stephen<br />
  Jay Gould&#8217;s</b> &quot;<i><a<br />
href="http://www.amazon.com/exec/obidos/tg/detail/-/0393308197/">Panda&#8217;s Thumb</a></i>&quot;<br />
  1992, which provides an orthodox explanation of evolutionary process, but think<br />
  also of what I call &quot;The Twin&#8217;s Thumbprints,&quot; an example that explains<br />
  not evolution, but the more fundamental paradigm of evolutionary development.<br />
  Look closely at two genetically identical human twins, and tell me what you<br />
  see. </p>
<p>Virtually all<br />
  the complexity of these twins at the molecular and cellular scale has been randomly,<br />
  chaotically, evolutionarily constructed. Their fingerprints, cellular microarchitecture<br />
  (including neural connections), and thoughts are entirely different. Yet they<br />
  look similar, age similarly, and even have 40-60% correlation in personality,<br />
  as several studies of separated twins have shown. That is an amazing level of<br />
  nonrandom convergence tuned into such simple initial parameters. Both twins<br />
  predictably go into puberty thirteen years later, after a virtually endless<br />
  period involving astronomical numbers of interactions at the molecular scale.
  </p>
<p>So it apparently<br />
  is with our own universe&#8217;s puberty, which occurred about 12.7 billion years<br />
  after the Big Bang, about 1 billion years ago. Earth&#8217;s intelligence is apparently<br />
  one of hundreds of billions of ovulating, self-fertilizing seeds in our universe,<br />
  one that is about to transcend into inner space very soon in cosmologic time.</p>
<p>One of the testable<br />
  conclusions of the developmental singularity hypothesis is that the parametric<br />
  settings for our universe are carefully tuned to support not simply the statistical<br />
  emergence of complex chemistry and occasional life, but a generalized relentless<br />
  MEST compression of computational systems in a process of accelerating hierarchical<br />
  emergence, a process that must develop accelerating local intelligence, interdependence,<br />
  and immunity (resiliency) on virtually all of the billions of planets in this<br />
  universe that are capable of supporting life for billions of years. This life<br />
  in turn is very likely to develop a technological singularity, and in some cosmologically<br />
  brief time afterward, to follow a constrained trajectory of universal transcension.
  </p>
<p>Most likely, this<br />
  transition leads to a subsequent restart of the developmental cycle, which would<br />
  provide the most parsimonious explanation yet advanced for how the special parameters<br />
  of our universe came to be. As with living systems, these parameters were apparently<br />
  self-organized, over many successive cycles, not instantiated by some entity<br />
  standing outside the cycle, but influenced incrementally by the intelligence<br />
  arising within it. In this paradigm, developmental failures are always possible.<br />
  But curiously, they are rarer, in a statistical sense, the longer any developmental<br />
  process successfully proceeds. Just look at the data for spontaneous abortions<br />
  in human beings, which are increasingly rare after the first trimester, to see<br />
  one obvious example.</p>
<p>But even if all<br />
  this speculation is true, we must realize that this says little about our evolutionary<br />
  role. Remember, life greatly cherishes variation. There is probably a very deep<br />
  computational reason why there are six billion discrete human beings on the<br />
  planet right now, rather than one unitary multimind. Consider that every one<br />
  of the developmental intelligences in this universe is, right now, taking its<br />
  own unique path down the rabbit hole, and they are all separated by vast distances,<br />
  planted very widely in the field, so to speak, to carefully preserve all that<br />
  useful evolutionary variation. I find that quite interesting and encouraging.<br />
  Free will, or the protected randomness of evolutionary search at the &quot;unbounded<br />
  edge&quot; between chaos and control in complex systems, always seems to be<br />
  central to the cycle at every scale in universal systems.</p>
<p>Now it is appropriate<br />
  to consider another commonly-asked question with regard to these dynamics. How<br />
  likely is it, by becoming aware of a cosmic replication cycle and our apparent<br />
  role in it, that we might alter the cycle to any appreciable degree? </p>
<p>To answer this,<br />
  it may also be helpful to realize that complex adaptive systems are always aware<br />
  that many elements of their world are constrained to operate in cycles (day/night,<br />
  wake/sleep, life/death, etc.). So it&#8217;s only an extension of prior historical<br />
  insight if we soon discover that our universe is also constrained to function<br />
  in the same manner. It may help to remember that long before human society had<br />
  theories of progress (after the 1650s), and of accelerating progress (after<br />
  the singularity hypothesis, beginning in the 1900s), cyclic cosmologies and<br />
  theories of social change were the norm. Even a mating salmon is probably very<br />
  aware of his own impending demise in the cycle of life. They certainly expend<br />
  their energy in ways that are entirely purposeful in that regard. </p>
<p>But awareness of a cycle, in any of these or other examples, does not allow<br />
  us to escape it. Or if we think we do, as in transferring our biological<br />
  bodies to cybernetic systems to avoid biological death, we will likely discover<br />
  that the same life/death cycle continues to operate at the scale that we<br />
  hold most dear, which at that time will no longer be our physical bodies, but<br />
  the realm of our higher thoughts, perennially struggling in algorithmic cycles<br />
  of evolutionary development, death and life, erasure and reconstitution. As<br />
  personal development theorist <b>Stephen Covey</b> (<i><a<br />
href="http://www.amazon.com/exec/obidos/tg/detail/-/0671708635/">Seven Habits<br />
  of Highly Effective People</a></i>, 1990) is fond of saying, you cannot break<br />
  fundamental principles, or laws of nature. You can only break yourself against<br />
  them, if you so choose. So it is that I don&#8217;t have any expectation that our<br />
  local intelligence could be successful in escaping the cosmic replication cycle.<br />
  I think that insight is valuable for predicting several aspects of the shape<br />
  of the future. </p>
<p>For example, every<br />
  scenario that has ever been written about humans &quot;escaping to the stars&quot;<br />
  ignores the accelerating intelligence that would occur onboard the ship. Such<br />
  civilizations must lead, in a very short time, to technological singularities<br />
  and, in the developmental singularity hypothesis, to universal transcension.<br />
  As <b>Vernor Vinge</b> says, it is very hard to &quot;write past the singularity,&quot;<br />
  and in this regard he has referred both to technological and developmental types.</p>
<p>Alternative scenarios<br />
  of constructing signal beacons, or nonliving, fixed-intelligence robotic probes<br />
  to spread an <i><a<br />
href="http://en2.wikipedia.org/wiki/Encyclopedia_Galactica">Encyclopedia Galactica</a>,</i><br />
  as <b>Carl Sagan</b> once proposed, ignore the massive reduction in evolutionary<br />
  variation that would result. This strategy would effectively turn that corner<br />
  of the galaxy into an evolutionarily sterile monoculture, condemning all intelligent<br />
  civilizations in the area to go down the hole in the same way we did, and all<br />
  developmental singularities in the vicinity to be of the same type. If I am<br />
  right, our information theory will soon be able to conclusively prove that all<br />
  such one-way communications can only reduce total universal complexity, and<br />
  are to be scrupulously avoided.</p>
<p>In conclusion,<br />
  I don&#8217;t think we can get around cyclic laws of nature, once we discover them.<br />
  But they can give us deep insight into how to spend our lives, how to surf the<br />
  tidal waves of accelerating change toward a more humanizing, individually unique,<br />
  and empowering future.</p>
<p>Much of this sounds<br />
  quite fantastical, so let me remind you that these are speculative hypotheses.<br />
  They will stand or fall based on much more careful scientific investigation<br />
  in coming years. Attracting that investigation is one of the goals of our organization.</p>
<p><b>If, as Ray Kurzweil has suggested, intelligence is developing on its own<br />
  trajectory&#8212;first in a biological substrate and now in computers&#8212;is there an inevitability to the singularity that makes speculating<br />
  about it superfluous? Is there really anything we can do about it one way or<br />
  the other?</b></p>
<p>Certainly you<br />
  can&#8217;t uninvent math, or electricity, or computers, or the internet, or RFID,<br />
  once they arrive on the scene. Anyone who looks closely notices a surprising<br />
  developmental stability and irreversibility to the acceleration.</p>
<p>But we must remember<br />
  that developmental events are only &quot;statistically deterministic.&quot;<br />
  They often occur with high probability, but only when the environment is appropriate.<br />
  Developmental failure, delay, and less commonly, acceleration can also occur.
  </p>
<p>Speaking optimistically,<br />
  I strongly suspect that there is little we could do to abort the singularity,<br />
  at this very late stage in its cosmic development. It appears to me that<br />
  we live in a &quot;Child Proof Universe,&quot; one that has apparently self-organized,<br />
  over many successive cycles, to keep many of the worst destructive capacities<br />
  out of the hands of impulsive children like us. </p>
<p>This is a controversial<br />
  topic, so I will mention it only briefly, but suffice it to say that after extensive<br />
  research I have concluded that no biological or nuclear destructive technologies<br />
  that we can presently access, either as individuals or as nations, could ever<br />
  scale up to &quot;species killer&quot; levels. All of them are sharply limited<br />
  in their destructive effect, either by our far more complex, varied, and overpowering<br />
  immune systems, in the biological case, or by intrinsic physical limits (combinatorial<br />
  explosion of complexity in designing multistage fission-fusion devices) in the<br />
  nuclear weapons case. These destructive limits may exist for reasons of deep<br />
  universal design. A universe that allowed impulsive hominids like us an intelligence-killing<br />
  destructive power wouldn&#8217;t propagate very far along the timeline.</p>
<p>Speaking pessimistically,<br />
  I&#8217;m sure we could do quite a bit to delay the transition, by fostering a series<br />
  of poorly immunized catastrophes. If events take an unfortunate and unforesighted<br />
  turn, our planet might suffer the death of a few million human beings at the<br />
  hands of poorly secured and monitored destructive technologies, perhaps even<br />
  tens of millions, in the worst of the credible terrorist scenarios. But I am<br />
  of the strong opinion that we will never again see the 170 million deaths, due<br />
  to warfare and political repression, that occurred during the 20<sup>th</sup><br />
  century. See <b>Zbigniew Brzezinski&#8217;s</b> <i><a<br />
href="http://www.amazon.com/exec/obidos/tg/detail/-/0684826364/">Out of Control</a></i>,<br />
  1995, for an insightful accounting of the excesses of that now fortunately bygone<br />
  era. We are on the sharply downsloping side of the global fatality curve, and<br />
  we can thank information and communications technologies for that, more than<br />
  any other single factor in the world.</p>
<p>Today, we live<br />
  in the era of instant news, electronic intelligence and violence that is increasingly<br />
  surgically minimized by an increasingly global consensus. Even with our primitive,<br />
  clunky, first generation internet and planetary communications grid, I feel<br />
  our planet&#8217;s technological immune systems have become far too strong and pluralistic,<br />
  or network-like, for the scale of political atrocities of the twentieth century<br />
  to ever recur. Yet conflict and exploitation will continue to occur, and we<br />
  could certainly choose a dirty, self-centered, nonsustainable, environmentally<br />
  unsound approach to the singularity. Catastrophes can and will continue to recur.<br />
  I hope for all our sakes that they are minimized, and that we learn from them<br />
  as rapidly and thoroughly as possible.</p>
<p>Unlike a small<br />
  minority of aggressive transhumanists, I applaud the efforts we are making to<br />
  create a more ecologically sustainable, carefully regulated world of science<br />
  and technology. Wherever we can inject values, sensitivity, and accountability into<br />
  our sociotechnological systems, I think that is a wonderful thing. I&#8217;d love<br />
  to see the U.S. take a greener path to technology development, the way several<br />
  countries in Europe have. I&#8217;m also pragmatic in realizing that most social changes<br />
  we make will be more for our own peace of mind, and would have little effect<br />
  on the intrinsic speed of our global sci-tech advances, on the rate of the increasingly<br />
  human-independent learning going on in the ICT architectures all around us.</p>
<p>I consider such<br />
  moves to be more reflections on how we walk the path, choices that will in most<br />
  cases do very little to delay the transition. I also do not think it is valuable<br />
  to hold the perspective that we should get to the singularity as fast as we<br />
  can, if that path would be anything other than a fully democratic course. There<br />
  are many fates worse than death, as all those who have freely chosen to die<br />
  for a cause have realized over the centuries. There are many examples of acceleration<br />
  that come at unacceptable cost, as we have seen in the worst political excesses<br />
  of the twentieth century. No one of us has a privileged value set.</p>
<p>So perhaps most<br />
  importantly, we need to remember that the evolutionary path is what we control,<br />
  not the developmental destination. That&#8217;s the essence of our daily moral choice,<br />
  our personal and collective freedom. We could chart a very nasty, dirty, violent,<br />
  and exploitative path to the singularity. Or with good foresight, accountability,<br />
  and self-restraint, we could take a much more humanizing course. I am a cautious<br />
  optimist in that regard.</p>
<p><b>Christine Peterson recently told me that artificial intelligence represents<br />
  the one future development about which she has the most apprehension. It can<br />
  come the closest of any scenario to Bill Joy&#8217;s &quot;the future that doesn&#8217;t<br />
  need us.&quot; If the coming of the singularity means the ascendancy of machine<br />
  intelligence and the end of the human era, shouldn&#8217;t we all be doing what we<br />
  can to <i>prevent </i>it from happening?</b></p>
<p>Ah yes, the Evil Killer Robots scenario. Some of my very clever transhumanist<br />
  colleagues worry quite a bit about &quot;Friendly AI.&quot; I&#8217;m glad to have<br />
  friends that are carefully exploring this issue, but from my perspective their<br />
  worries seem both premature and cautiously overstated. I strongly suspect that<br />
  A.I.s, by virtue of having far greater learning ability than us, will be, must<br />
  be, far more ethical than us. That is because I consider ethics to be an emergent<br />
  computational interdependence, a mathematics of morality, a calculus of civilization<br />
  that is invariably discovered by all complex adaptive systems that function<br />
  as collectives. And anything worthy of being called intelligent always functions<br />
  as a collective, including your own brain. Today&#8217;s cognitive scientists are<br />
  discovering the evolutionary ethics that have become self-encoded in all known<br />
  complex living systems, from octopi to orangutans, from guppies to gangsters.<br />
  For more on this intriguing perspective, see such works as <b>Robert Axelrod&#8217;s</b><br />
  <i><a<br />
href="http://www.amazon.com/exec/obidos/tg/detail/-/0465021212/">The Evolution<br />
  of Cooperation</a></i>, 1985, <b>Matt Ridley&#8217;s</b><br />
  <i><a<br />
href="http://www.amazon.com/exec/obidos/tg/detail/-/0140264450/">The Origins of<br />
  Virtue</a></i>, 1998, and <b>Robert Wright&#8217;s</b><br />
  <i><a<br />
href="http://www.amazon.com/exec/obidos/tg/detail/-/0679758941/">Non-Zero</a></i>,<br />
  2001.</p>
<p>This optimism<br />
  isn&#8217;t enough, of course. We humans had to go through a nasty, violent, and selfish<br />
  phase before we became today&#8217;s semi-civilized simians. How do we know computers<br />
  won&#8217;t have to do the same thing? I think the answer to this question is that<br />
  at one level, <b>Peterson&#8217;s</b> intuitions are probably right. Tomorrow&#8217;s partially-aware robotic systems and<br />
  A.I.s will have to go through a somewhat unfriendly, dangerous phase of &quot;insect<br />
  intelligence.&quot; As <b>Jeff<br />
  Goldblum</b> reminded us in <b>David Cronenberg&#8217;s</b> <i><a href="http://www.amazon.com/exec/obidos/tg/detail/-/6305951454/">The<br />
  Fly</a></i>, insects are brutal, they don&#8217;t compromise, they don&#8217;t have compassion.<br />
  Their politics, as <b>E.O. Wilson&#8217;s</b><br />
  <i><a<br />
href="http://www.amazon.com/exec/obidos/tg/detail/-/0674002350/">Sociobiology</a></i>,<br />
  1975/2000 reminds us, are quite comfortable with brute force. That&#8217;s a potentially<br />
  dangerous developmental stage for an A.I. You wouldn&#8217;t want that kind of A.I.<br />
  running your ICU, or your defense grid. Or your nanoassembler machines. </p>
<p>But you would<br />
  very likely let such a system run the robotics in a manufacturing plant, especially<br />
  if evolutionary systems have proven, as they are already demonstrating today,<br />
  to be far more powerfully self-improving, self-correcting, and economical than<br />
  our top down, human-designed software systems. That plant, of course, would<br />
  be outfitted and embedded within a much larger matrix of technological fire<br />
  extinguishers, an immune system capable of easily putting out any small fires<br />
  that might develop.</p>
<p>But with a learning curve that is multi-millionfold faster than ours, I expect<br />
  the &quot;insect transition&quot; to last weeks or months, not years, for any<br />
  self-improving electronic evolutionary developmental system. You can be sure<br />
  these systems will be well watched over by a bevy of A.I. developers, and those<br />
  few catastrophes that do occur will be carefully addressed by our cultural and technological<br />
  immune systems. It&#8217;s easy to underestimate the extent and effectiveness of immune<br />
  systems; they aren&#8217;t obvious or all that sexy, but they underlie every intelligent<br />
  system you can name. Computer scientist <b>Diana Gordon-Spears</b>and others<br />
  have already organized conferences on &quot;Safe Learning Agents,&quot; for<br />
  example, and we have only just begun to build world-modeling robotics. We&#8217;re<br />
  still several decades away from anything self-organizing at the hardware level,<br />
  anything that could be &quot;intentionally&quot; dangerous.</p>
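<p>A quick sanity check on the arithmetic above (a sketch in Python; the 100,000-year figure for the analogous biological phase is our illustrative assumption, not Smart&#8217;s):</p>

```python
# Rough arithmetic behind "weeks or months, not years": a learning
# process running a millionfold faster than biology compresses an
# analogous phase of, say, 100,000 years into about a month.
# (The 100,000-year figure is an illustrative assumption.)
years_of_biological_learning = 100_000
speedup = 1_000_000  # lower bound on "multi-millionfold"

days = years_of_biological_learning * 365 / speedup
print(round(days, 1))  # ~36.5 days: weeks to months, not years
```

Any plausible choice of inputs within a few orders of magnitude gives the same qualitative answer, which is the point of the passage.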
<p>We also need<br />
  to remember that humans will be practicing artificial selection on tomorrow&#8217;s<br />
  electronic progeny. That is a very powerful tool, not so much for creating complexity,<br />
  but for pruning it, for ensuring symbiosis. We&#8217;ve had 10,000 years of artificial<br />
  selection on our dogs and cats. Their brain structures are black boxes to us,<br />
  and yet we find very few today that will try to grab human babies when the parents<br />
  are not looking. Again, those few that do are taken care of by immune systems<br />
  (we don&#8217;t continue to breed such animals, statistically speaking).</p>
<p>In short, I expect<br />
  human society will coexist with many decades of very partially aware A.I.s, beginning<br />
  some time between 2020 and 2060, which will give us ample time to select for stable,<br />
  friendly, and <i>very</i> intimately integrated intelligent partners, for each<br />
  of us. <b>Hans Moravec</b><br />
  (<i><a<br />
href="http://www.amazon.com/exec/obidos/tg/detail/-/0195136306/">Robot</a></i>,<br />
  1999) has done some of the best writing in this area, but even he sometimes<br />
  underestimates the importance of the personalization that will be involved.<br />
  As a species, humanity would not let the singularity occur as rapidly as it<br />
  will without personally witnessing the accelerating usefulness of A.I. interacting<br />
  with us in all aspects of our lives, modeling us through our LUI systems, lifecams,<br />
  and other aspects of the emerging electronic ecology.</p>
<p>By contrast,<br />
  every scenario of &quot;fast takeoff&quot; A.I. emergence that I&#8217;ve ever<br />
  seen (the heroic individual toiling away in the lab at night to create HAL-9000)<br />
  just doesn&#8217;t seem to account for the immense cycles of replication, variation,<br />
  interaction, selection, and convergence in evolutionary development that are<br />
  always required to create intelligence in both a bottom-up and top-down fashion.<br />
  Since the 1950s, almost all the really complex technologies we&#8217;ve created have<br />
  required teams, and there is presently nothing in technology that is as remotely<br />
  complex as a mammalian brain. </p>
<p>As I mention<br />
  on my website, I think we are going to have to see massively parallel hardware<br />
  systems, directed by some type of DNA-equivalent parametric hardware description<br />
  language, unfolding very large, hardware-encoded neural nets and testing them<br />
  against digital and real environments in very rapid evolutionary developmental<br />
  cycles, before we can tune up a semi-intelligent A.I. The transition will likely<br />
  require many teams of individuals and institutions, integrating bottom-up and<br />
  top-down approaches, and be primarily a hardware story, and only secondarily<br />
  a software story, for a number of reasons.</p>
<p><b>Bill Joy</b>, in <i><a<br />
href="http://www.wired.com/wired/archive/11.12/billjoy.html">Wired 12.2003</a></i>,<br />
  notes that we can expect a 100X increase (6-7 doublings) in general hardware<br />
  performance over the next ten years, and a 10X increase in general software<br />
  (e.g., algorithmic) performance. While certain specialized areas, like computer<br />
  graphics chips, may run faster (or slower), on average this sounds about right.<br />
  Note the order of magnitude difference in the two domains. Hardware has always<br />
  outstripped software because, as I&#8217;ve said earlier, it seems to be following<br />
  a developmental curve that is more human discovered than human created. It is<br />
  easier to discover latent efficiencies in hardware vs. software &quot;phase<br />
  space&quot;, because the search space is much more directed by the physics of<br />
  the microcosm. <b>Teuvo Kohonen</b>, one of the pioneers of neural networks,<br />
  tells me that he doesn&#8217;t expect the neural network field to come into maturity<br />
  until most of our nets are implemented in hardware, not software, a condition<br />
  we are still at least a decade or two away from attaining. </p>
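<p>Joy&#8217;s figures are internally consistent, as a few lines of arithmetic show (a sketch in Python; the 100X and 10X numbers are Joy&#8217;s, the calculation is ours):</p>

```python
import math

# A 100X hardware gain over ten years implies log2(100) doublings,
# i.e. a doubling roughly every 18 months.
doublings = math.log2(100)
print(round(doublings, 2))            # 6.64 -- Joy's "6-7 doublings"

months_per_doubling = 10 * 12 / doublings
print(round(months_per_doubling, 1))  # ~18.1 months per doubling

# The 10X software gain, by contrast, is only log2(10) doublings,
# the order-of-magnitude gap the text points out.
print(round(math.log2(10), 2))
```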
<p>The central problem<br />
  is an economic one. No computer manufacturer can begin to explore how to create<br />
  biologically-inspired, massively parallel hardware architectures until our chips<br />
  stop their magic annual shrinking game and have become maximally-miniaturized<br />
  (within the dominant manufacturing paradigm) commodities. That isn&#8217;t expected<br />
  for at least another 15 years, so we&#8217;ve got a lot of time yet to think about<br />
  how we want to build these things. </p>
<p>If I&#8217;m right,<br />
  the first versions of really interesting A.I.s will likely emerge on redundant,<br />
  fault tolerant evolvable hardware &quot;Big Iron&quot; machines that take us<br />
  back to the 1950s in their form factor. Expect some of these computers to be<br />
  the size of buildings, tended by vast teams of digital gardeners. Dumbed-down<br />
  versions of the successful hardware nets will be grafted into our commercial<br />
  appliances and tools, mini-nets built on a partially reconfigurable architecture,<br />
  systems that will regularly upgrade themselves over the Net. But even in the<br />
  multi-millionfold faster electronic environment, a bottom-up process of evolutionary<br />
  development must still require decades, not days, to grow high-end A.I. And<br />
  primarily top-down A.I. designs are just flat wrong, ignorant of how complexity<br />
  has always emerged in physical systems. Even all of human science, which some<br />
  consider the quintessential example of a rationally-guided architecture, has<br />
  been far more an inductive, serendipitous affair than a top-down, deductive<br />
  one, as <b>James Burke</b><br />
  (<i><a<br />
href="http://www.amazon.com/exec/obidos/tg/detail/-/0316116726/">Connections</a></i>,<br />
  1995) delights in reminding us.</p>
<p>So, when one<br />
  of the first generation laundry folding robots in 2030 folds your cat by accident,<br />
  we&#8217;ll learn a tremendous amount about how rapidly self-correcting these systems<br />
  are, how quickly, with minor top-down controls and internet updates, we can<br />
  help them to improve their increasingly bottom-up created brains. Unlike today&#8217;s<br />
  still-stupid cars, for example, which currently participate in 40,000 American<br />
  fatalities every year, tomorrow&#8217;s LUI-equipped, collision avoiding, autopiloting<br />
  vehicles will be increasingly human friendly and human protecting every year.<br />
  This encoded intelligence, this ability to ensure increasingly desirable outcomes,<br />
  is what makes a <a href="http://www.segway.com/">Segway</a> so fundamentally<br />
  different from a bicycle. Segway V, if it arrives, would put out a robotic hand<br />
  or an airbag to protect you from an unexpected fall. So it will be with your<br />
  PDA of 2050, but in a far more generalized sense.</p>
<p>In a related point,<br />
  I also wouldn&#8217;t worry too much about the loss of our humanity to the machines.<br />
  Evolution has shown that good ideas always get rediscovered. The eye, for example,<br />
  was discovered at least thirty times by some otherwise very divergent genetic<br />
  pathways. As <b>Simon Conway Morris</b> eloquently argues (<i><a<br />
href="http://www.amazon.com/exec/obidos/tg/detail/-/0521827043/">Life&#8217;s Solution</a></i>,<br />
  2003) every single aspect of our human-ness that we prize has already been independently<br />
  emulated to some degree, by the various &quot;nonhuman&quot; species we find<br />
  on this planet. Octopi are so smart, for example, that they build houses, and<br />
  learn complex behavior (e.g., jar-opening) from each other even when kept in<br />
  adjacent aquaria.</p>
<p>This leads us<br />
  to a somewhat startling realization. Even if, in the most abominably unlikely<br />
  of scenarios, all of humanity were snuffed out by a rogue A.I., from a developmentalist<br />
  perspective it seems overwhelmingly likely that good A.I.s would soon emerge<br />
  to recreate us. Probably not in the &quot;Christian rapture&quot; scenario envisioned<br />
  by transhumanist <b>Frank Tipler</b> in <i><a<br />
href="http://www.amazon.com/exec/obidos/tg/detail/-/0385467990/">The Physics of<br />
  Immortality</a></i>, 1997, but certainly our informational essence, all that<br />
  we <i>commonly</i> hold dear about ourselves. </p>
<p>How can we even<br />
  suspect this? Humanity today is doing everything it can to unearth all that<br />
  came before us. It is in the nature of all intelligence to want to deeply know<br />
  its lineage, not just from our perspective, but from the perspective of the<br />
  prior systems. If the world is based on physical causes, then in order to truly<br />
  know one understands the world, one must truly know, and be able to understand<br />
  at the deepest level, the systems in which one is embedded, the systems from<br />
  which one has emerged, in a continuum of developmental change. The past is always<br />
  far more computationally tractable than what lies ahead. </p>
<p>That curiosity<br />
  is a beautiful thing, as it holds us all tightly interdependent, one common<br />
  weave of the spacetime fabric, so to speak.</p>
<p>That&#8217;s why we<br />
  are already spending tens of millions of dollars a year trying to model the<br />
  way bacteria work, trying to predict, eventually in real-time, everything they<br />
  do before they even do it, so that we know we truly understand them. That&#8217;s<br />
  why emergent A.I. will do the same thing to us, permeating our bodies and brains<br />
  with its nanosensor grids, to be sure it fully understands its heritage. Only<br />
  then will we be ready to make the final transition from the flesh. </p>
<p><b>Also on your website, I read that the singularity will occur within the<br />
  next 40 to 120 years. Isn&#8217;t that kind of a broad range? What&#8217;s your best guess<br />
  on when it will occur?</b></p>
<p>I find that those<br />
  making singularity predictions can be usefully divided into <a<br />
href="http://www.singularitywatch.com/explore.html">three camps</a>: those predicting<br />
  near term (now to 2029), mid-term (2030-2080), and longer term (2081-2150+)<br />
  emergence of a generalized greater-than-human intelligence. Each group has somewhat<br />
  different demographics, which may be interesting from an anthropological perspective.
  </p>
<p>I think the range<br />
  is so broad because the future is inherently unpredictable and under our influence.<br />
  It is also true that none of us has yet developed a popular set of quantitative<br />
  methodologies for thinking rigorously about these things. Very little money<br />
  or attention has been given to them. If you&#8217;d like to send a donation to our<br />
  organization to help in that regard, let us know.</p>
]]></content:encoded>
			<wfw:commentRss>https://blog.speculist.com/speaking_of_the_future/riding-the-spir.html/feed</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Riding the Spiral</title>
		<link>https://blog.speculist.com/speaking_of_the_future/riding-the-spir-1.html</link>
		<comments>https://blog.speculist.com/speaking_of_the_future/riding-the-spir-1.html#comments</comments>
		<pubDate>Thu, 04 Dec 2003 12:36:47 +0000</pubDate>
		<dc:creator>Phil Bowermaster</dc:creator>
				<category><![CDATA[Speaking of the Future]]></category>

		<guid isPermaLink="false">http://localhost/specblog/?p=14</guid>
		<description><![CDATA[Speaking of the Future with John Smart Consider this basic shape: I&#8217;ve always been fascinated by spirals. When I was a kid, I used to sit and draw them for hours at a time. This was long before I knew anything about Phi or the Fibonacci sequence, before I had ever heard of logarithmic spirals [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><b>Speaking of the Future with John Smart</b></p>
<blockquote>
<p>Consider this basic shape: </p>
<p align="center"><img src="https://www.aceofjustice.com/graphics/bigblkspiral.gif" width="167" height="105"></p>
<p>I&#8217;ve always been fascinated by spirals. When I was a kid, I used to sit and<br />
    draw them for hours at a time. This was long before I knew anything about<br />
    <a href="http://www.amazon.com/exec/obidos/ASIN/0767908155/104-1300867-2335969">Phi</a><br />
    or the <a href="http://www.mcs.surrey.ac.uk/Personal/R.Knott/Fibonacci/fib.html">Fibonacci<br />
    sequence</a>, before I had ever heard of logarithmic spirals or <a href="http://www.moonstar.com/%7Enedmay/chromat/fibonaci.htm">fractals</a>,<br />
    before I ever came to work for a company with such an aesthetically pleasing<br />
    <a href="http://www.sybase.com/home">logo</a>. I&#8217;ve never lost interest<br />
    in them. In fact, whether meaning to or not, I seem to fill my life with spirals.</p>
<p>My choice of employer was just the beginning.</p>
<p>Take a look at this ironwork that sits atop my bedroom mirror. It&#8217;s pretty<br />
    close to the shape in the line drawing above, although it stops short of being an actual<br />
    spiral.</p>
<p align="center"><img src="https://www.aceofjustice.com/graphics/mirror.jpg" width="152" height="66"></p>
<p>Here&#8217;s my coffee mug. Now this shape <i>is</i> a spiral, but it&#8217;s different from<br />
    the one shown above. It&#8217;s more &quot;practical,&quot; a squashed spiral that<br />
    will fit in a small space. </p>
<p align="center"><img src="https://www.aceofjustice.com/graphics/mug.jpg" width="108" height="82"></p>
<p>Here&#8217;s some original artwork, the basis for the Speculist logo. These spirals<br />
    are actually the same as the line drawing; it was the template I used to create my galaxy.</p>
<p align="center"><img src="https://www.aceofjustice.com/graphics//globe4.gif" width="150" height="150"></p>
<p>The truth is, whether I try to fill my life with it or not, that spiral is<br />
    everywhere. This simple shape, along with the math that underpins it, is encoded<br />
    into our universe. The sequence of numbers that produces it is simplicity<br />
    itself:</p>
<p align="center">1 1 2 3 5 8 13 21 34 55 89 144 233 377 610 987 </p>
<p>(To get the next number, you simply add the previous two.) </p>
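<p>The rule is easy to verify in code (a sketch in Python, our illustration; the ratio of successive terms converges on Phi, the golden ratio that generates the logarithmic spiral):</p>

```python
def fibonacci(n):
    """Return the first n Fibonacci numbers, starting 1, 1."""
    seq = [1, 1]
    while len(seq) < n:
        seq.append(seq[-1] + seq[-2])  # each term is the sum of the previous two
    return seq[:n]

seq = fibonacci(16)
print(seq)  # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987]

# Ratios of successive terms approach Phi (~1.618).
phi = (1 + 5 ** 0.5) / 2
print(seq[-1] / seq[-2], phi)
```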
<p>And yet from that simplicity comes immense and wonderful complexity. A nautilus<br />
    shell encodes that sequence to produce its spiral shape, as does a wave just<br />
    before it breaks on the shore. And, as I&#8217;ve shown above, the trillions of<br />
    stars making up a galaxy tend to follow the same sequence and produce the same<br />
    lovely spiral. There are many, many other examples.</p>
<p>And it may not just be physical objects that follow this sequence. John Smart,<br />
    Director of the <a href="http://www.singularitywatch.com/">Institute for Accelerating<br />
    Change</a>, has suggested that history, perhaps even time itself, may be driven<br />
    by <a href="http://www.singularitywatch.com/spiral.html">such a sequence</a>.<br />
    Following the sequence of events that make up history is, perhaps, not unlike<br />
    following the arc of a galactic spiral arm as it sweeps its way into the center.<br />
    Imagine such a trip: you start out moving slowly in nearly empty space, gaining<br />
    momentum as the turns begin to come more quickly and the frequency of the<br />
    stars increases; soon there are more stars and then more, and now you&#8217;re spiraling<br />
    in and in and in, to the incredibly hot, dense core&#8212;and then even <i>further<br />
    </i>in, to a place that&#8217;s beyond our ability to describe accurately, or really<br />
    even to imagine.</p>
<p>In the interview that follows, John Smart takes us on just such a journey<br />
    through time. The galaxy that we are travelling through is the history of<br />
    the universe itself; the turns in the spiral are the major developmental epochs;<br />
    the stars are the individual, evolutionary changes. Like a trip to the center<br />
    of the galaxy, this journey takes us, quite literally, beyond the limits of<br />
    the imagination. </p>
<p>You may be startled to realize (as I was) where exactly we are on that winding<br />
    path to the brink of the unknowable.</p>
</blockquote>
<p><a name="more"></a></p>
<p><b>Part I: Seven Questions About the Future</b></p>
<p><b>1. The present is the future relative to the past. What&#8217;s the best thing<br />
  about living here in the future?</b></p>
<p>Let me begin with one of several books I&#8217;ll be recommending for your browsing<br />
  pleasure. As <a<br />
href="http://www.cato.org/">Cato Institute</a> authors <b>Julian Simon</b> and<br />
  <b>Stephen Moore</b> noted in 2000, <i><a<br />
href="http://www.amazon.com/exec/obidos/tg/detail/-/1882577973/">It&#8217;s Getting<br />
  Better All the Time</a></i>. Not only that, but things are getting better by<br />
  a greater absolute amount each year, with the exception of very few remaining<br />
  parts of the developing world. And improving conditions in the developing world<br />
  is something we also have more ability to do today than ever before.</p>
<p>This amazing<br />
  state of affairs is due almost entirely to advances in science and technology,<br />
  and the profoundly civilizing way that these subjects interact with the half-bald<br />
  primates that have discovered them and who are now feverishly employing them<br />
  at every level of human endeavor on this precious little planet. </p>
<p>Looking at the<br />
  same process from the informational side (sometimes called the metaphysical<br />
  side), the powerful transformations we are witnessing are also due to what the<br />
  transhumanist mystic <b>Teilhard de Chardin</b><br />
  (<i><a<br />
href="http://www.amazon.com/exec/obidos/tg/detail/-/006090495X/">The Phenomenon<br />
  of Man</a></i>, 1955) called &quot;psychical energy&quot;, the accelerating<br />
  forces of conscious intelligence, loving interdependence, and resilient immunity,<br />
  the holistic, informational yang to the reductionist, atomistic yin of sci-tech.
  </p>
<p>I think we are<br />
  beginning to recognize the importance of both the &quot;psychical&quot;/informational<br />
  and the physical/material in every complex system, what <b>John Archibald Wheeler</b><br />
  calls the increasingly aware &quot;it&quot; that emerges from all our quantum<br />
  &quot;bits.&quot;</p>
<p><b>2. What&#8217;s the biggest disappointment?<br />
  </b></p>
<p>The U.S. has<br />
  been the world&#8217;s technological leader since the invention of the &quot;American<br />
  System&quot; of mass production and interchangeable parts in the 1910s. But<br />
  we&#8217;ve fallen away from a clear leadership position in several areas of science<br />
  and technology in recent decades, and I think the world is poorer for it. </p>
<p>Ask yourself:<br />
  what is the single greatest goal currently unifying our national efforts in<br />
  science and technology? I don&#8217;t have a clear answer to that question, and I<br />
  think there should always be one, or at least a very small handful. </p>
<p>Stopping terrorism<br />
  is one of today&#8217;s admirable, timely, and necessary great goals. And there are certainly<br />
  effective technological immune systems that we will develop around this goal<br />
  in coming years. But this is a reactive, not a proactive program. We aren&#8217;t<br />
  presently rallying the country around a positive, <a<br />
href="http://www.nonzero.org/">non-zero sum</a> developmental vision. Nanotechnology<br />
  is a candidate, but as I will describe later, it cannot yet fire the public<br />
  imagination the way more achievable, short-term goals can. Where&#8217;s the leadership<br />
  we need?</p>
<p>We&#8217;ve had some<br />
  effective great goals in the past. <b>John F. Kennedy&#8217;s</b><br />
  Space Program most readily comes to mind. The infrastructure projects of <b>Franklin<br />
  Roosevelt&#8217;s</b> New Deal were at least a partial success, if economically mixed. Even <b>Lyndon Johnson&#8217;s</b><br />
  War on Poverty made some measurable progress. </p>
<p>Why is the Moon<br />
  Shot the great goal we all most clearly identify? Scientific and technological<br />
  goals, if chosen wisely, can have both dramatic consequences and clear deliverables,<br />
  unlike many of our social, economic, and political objectives. At best, a great<br />
  goal is both vitally important and demonstrably achievable. At worst, as with<br />
  the Wars on Cancer, or Drugs, or Inner City Violence, the putative great goal<br />
  diverts our energies and vision from more critical priorities. Alternatively,<br />
  a vitally important goal may be too ambitious to achieve within one generation,<br />
  like WMD Nonproliferation, which has been measurably improved by every president<br />
  since Kennedy. Alternative energy development, greenhouse gas reduction, and<br />
  a host of other goals fall into this latter category.</p>
<p>Worthy as they<br />
  are, these types of goals deserve to remain on the second tier of the public<br />
  consciousness. Only the most important, urgent, and achievable goals deserve<br />
  to be named as our top priorities. I would also argue strongly that if we live<br />
  in a time when we can&#8217;t find those, then the country&#8217;s direction drifts, noise<br />
  exceeds signal, and political apathy becomes the norm.</p>
<p>So what is the<br />
  great goal our country is currently ignoring? It&#8217;s definitely not space exploration,<br />
  as I argue later in this interview. That era is over for all but our robotic<br />
  progeny, and even they will only be sending out a small number of &quot;Eyes<br />
  in the Sky&quot; to relay back what little we still don&#8217;t understand about the<br />
  simplistic historical cosmologies that have led to our astounding local complexity.</p>
<p>No, the real<br />
  acceleration today is the creation of inner space, not the exploration of outer<br />
  space. The trajectory of intelligence development has always been toward increasingly<br />
  local, increasingly Matter-, Energy-, Space-, and Time-compressed (&quot;MEST-compressed&quot;)<br />
  computational domains, and there is nothing on the horizon that suggests we<br />
  will begin to violate that. Indeed, all signs point toward a world of greater<br />
  energy densities of local computation, as I will discuss later. Science and<br />
  technology remain the key story in this transformation, as they have since the<br />
  birth of our nation, and anyone who looks carefully will tell you that Information<br />
  and Communication Technologies (ICT) are the central drivers of all scientific<br />
  and technological change.</p>
<p>Major changes<br />
  are afoot. We are creating a virtual or simulated world, one that will soon<br />
  be far richer and more productive than the physical world it augments. At the<br />
  same time, humanity is becoming intimately connected to and symbiotically captured<br />
  within our accelerating digital ecology. While many elements of our individuality<br />
  are flowering, many others are necessarily atrophying through disuse. This gives<br />
  us pause. Many of today&#8217;s first world humans no longer know how to grow and<br />
  prepare food (due to automated food production), how to repair many of our most<br />
  basic tools and technologies (due to automated manufacture and specialized service<br />
  for complex systems), how to do arithmetic by hand (due to ubiquitous digital<br />
  calculators), how to read at the level of their parents (due to our media-based<br />
  culture) or even how to read a map (due to GPS). Yet these atrophies are natural<br />
  and predictable, in the same way our Australopithecine sense of smell rapidly<br />
  declined once we began forming social structures, applying ourselves to more<br />
  sophisticated network-based modes of computation (for more on this, see <b>Carl Zimmer&#8217;s</b><br />
  wonderful &quot;<a<br />
href="http://www.carlzimmer.com/articles/PDF/NoseGenes2002.pdf">The Rise and Fall<br />
  of the Nasal Empire</a>,&quot; <i>Natural History</i>, June 2002). Our ever-more-stimulated<br />
  cortex continues to expand, not shrink, in this developmental process. Our finite,<br />
  precious set of cognitive modules are always repurposed for higher level activity,<br />
  the way Wernicke&#8217;s and Broca&#8217;s areas emerged once humans began using the technology<br />
  of speech (see <b>Terrence<br />
  Deacon&#8217;s</b> <i><a<br />
href="http://www.amazon.com/exec/obidos/tg/detail/-/0393317544/">The Symbolic<br />
  Species</a></i>, 1998). Once again, we humans are becoming nodes in larger networks,<br />
  this time on national and global scales, involving technological processes far<br />
  faster, more flexible, and more permanent than the biological domain. </p>
<p>To my mind, the<br />
  last century&#8217;s accelerations were driven most significantly by human discovery<br />
  within the technological hardware and materials science space (and to a much<br />
  smaller extent, algorithmic discovery in software). In other words, this process<br />
  has apparently been guided by the special, preexisting, computation-accelerating<br />
  physics of the microcosm, a very curious feature of the universe we inhabit,<br />
  as long noted by <b>Richard Feynman, Carver Mead</b>, and several other physical theorists and experimentalists. Secondarily, the<br />
  advances we have seen have also been driven by human initiative and creativity<br />
  in all domains, and by the quality of choices we have made in scientific and<br />
  technological development. We must move beyond our pride to realize that human<br />
  creativity has played a supporting role to human discovery in this process,<br />
  but when we do I think great insight can emerge.</p>
<p>Where the clock, the telegraph, the engine, the telephone, the nuclear chain<br />
  reaction, and the television were organizing metaphors for other times, the<br />
  internet has become the metaphor for ours. It is the central catalyst of human<br />
  and technological computation for our generation, the leading edge of the present<br />
  developmental process of accelerating change. The internet, growing before our<br />
  eyes, will soon become planetnet, a system so rich, ubiquitous, and natural<br />
  to use that it will be a semi-intelligent extension of ourselves, available<br />
  to us at every point on this sliver of surface, between magma and vacuum, that<br />
  we call home. That will be very empowering and liberating, and at the same time,<br />
  civilizing. The human biology doesn&#8217;t change, but we are creating an intelligent<br />
  house of almost unimaginable subtlety and sophistication for the impulsive human.</p>
<p>All this said,<br />
  our goals should try to reflect these natural developmental processes as much<br />
  as our collective awareness will allow. It is my contention that the internet<br />
  is territory within which our most achievable and important current great goals<br />
  lie.</p>
<p>A number of technologists<br />
  have proposed that there are two main bottlenecks to the internet&#8217;s impending<br />
  transformation into a permanent, symbiotic appendage to the average citizen.<br />
  The first is the lack of ubiquitous, affordable, always-on, always-accessible<br />
  broadband connectivity for all users, and the second is the current necessity<br />
  of a keyboard-dependent interface for the average user&#8217;s average interaction<br />
  with the system. </p>
<p>In other words,<br />
  developing cheap, fat data pipes, both wired and wireless, and a growing set<br />
  of useful <a href="http://www.singularitywatch.com/lui.html">Linguistic User<br />
  Interfaces</a> (LUIs) are obvious candidates for our nation&#8217;s greatest near<br />
  term ICT developmental challenges. Just like the transcontinental railroad was<br />
  a great goal of the late 1800&#8242;s, getting affordable broadband to everyone in<br />
  this country by 2010, and a <a<br />
href="http://www.singularitywatch.com/promontorypoint.html">first generation LUI<br />
  by 2015</a> appear to be the greatest unsung goals of our generation. Now we<br />
  just need our national, international, and institutional leaders to start singing<br />
  this song, in unison.</p>
<p>This is a truly<br />
  global transformation, one dwarfing everything else on the near-term horizon.<br />
  It is such a planetary issue, in fact, that given the unprecedented human productivities<br />
  that are already being unleashed by internet-aided manufacturing and services<br />
  globalization since the mid-1990s, a strong case can be made that we might<br />
  economically benefit more in the U.S., even today, by getting greater broadband<br />
  penetration first not to our own citizens, but to the youth of a number of trade-oriented,<br />
  pro-capitalist countries in the developing world! Unfortunately that level of<br />
  globally aware, self-interested prioritization is not yet politically salable<br />
  as a great goal to be funded by U.S. tax dollars. But I predict that it increasingly<br />
  will be, in a world that already pools its development dollars for a surprising<br />
  number of transnational projects. At any rate, we can at least push for accelerated<br />
  efforts in international technology transfer in internet related areas, concurrent<br />
  with our domestic agenda. </p>
<p>If you&#8217;ve never<br />
  heard of a LUI before, take a browse through the links above. Your father used<br />
  a TUI (text-based user interface). You use a GUI (graphical user interface).<br />
  Your kid will primarily use a LUI (voice-driven interface) to speak to the computers<br />
  embedded in every technology in her environment. She&#8217;ll continue to use TUIs<br />
  and GUIs, but only secondarily, not for her typical, average interaction with<br />
  a machine. Your grandchildren will use a NUI (neural user interface), a biologically-inspired,<br />
  self-improving, very impressive set of machines. More on that later.</p>
<p>Declaring broadband<br />
  and LUI as great goals needs to be differentiated from the much-hyped &quot;Fifth<br />
  Generation&quot; AI project, that 1980s great goal in Japan, which predictably<br />
  failed in the 1990s. General artificial intelligence, a general purpose NUI,<br />
  is much too hard a national goal to declare today. So is the development of<br />
  a molecular assembler, or a computational nanocell/molectronic fabrication system<br />
  for nanotechnology by 2020, as powerful as such devices will eventually become.<br />
  <b>Christine Peterson</b><br />
  of the <a<br />
href="http://www.foresight.org/">Foresight Institute</a> has even stated that<br />
  a nanotech great goal, at least in the form of a Manhattan Project for molecular<br />
  nanotechnology, would be premature today. It is my opinion that the National<br />
  Nanotechnology Initiative, perhaps our current leading candidate for a great<br />
  technology goal, has already provided a commendable and unprecedented level<br />
  of funding to this worthy field for the present time. Now we need to see a Broadband<br />
  and LUI Initiative with some very challenging five, ten, fifteen, and twenty<br />
  year goals set. </p>
<p>Broadband and<br />
  basic LUIs everywhere within a generation would throw gasoline on the fire of<br />
  human innovation. This level of internet would link all our wisest minds, including<br />
  even those elders who seldom use computers today, into one real-time community.<br />
  It would accelerate our nation and more importantly, the entire planet even<br />
  more than the transcontinental railroad, which compressed coast-to-coast travel<br />
  time from six months to six days. Maximal broadband penetration plus an incrementally<br />
  more powerful and useful LUI is a dramatic and achievable objective for the<br />
  United States over the next twenty years. IBM technologist <b>John<br />
  Patrick</b> in his insightful <i><a<br />
href="http://www.amazon.com/exec/obidos/tg/detail/-/0738205133/">Net Attitude</a></i>,<br />
  2001, has broadly described the challenges of a Next Generation Internet. But<br />
  even Patrick does not properly emphasize the central importance of incorporating<br />
  natural language processing (NLP) systems as early and broadly as practical.<br />
  Developing a functional LUI is a great goal whose progress we could measure<br />
  each year forward, something we can also catalyze worldwide as others emulate<br />
  our leadership in the emerging digital community. </p>
<p>Of course, if<br />
  we don&#8217;t declare this goal, natural technological developmental processes will<br />
  likely eventually deliver it for us anyway. Perhaps first to other nations,<br />
  and then eventually, to us. So why bother? Because if we see it, and have the<br />
  courage to declare it and strive for it, there are at least two major benefits<br />
  we can reap. </p>
<p>The first benefit<br />
  will be a measure of developmental acceleration. Even with the inefficiencies<br />
  of large government, a billion dollar a year program of public targeted grants,<br />
  with private matching funds and excellent public relations to get everyone on<br />
  this bandwagon, might accelerate the emergence of a functional LUI by a decade.<br />
  That would likely be the best spent money in our entire R&amp;D budget. </p>
<p>A less politically<br />
  likely but still plausible &quot;Open Manhattan Project,&quot; involving a number<br />
  of competing centers and a multi-billion dollar annual public-private commitment,<br />
  might accelerate the LUI by twice this amount. Many of my computer scientist<br />
  colleagues, knowing the inchoate state of the field today, think that developing<br />
  and deploying a LUI powerful enough to be used by most people for most of their<br />
  daily computer interactions by 2020 is a very challenging vision. Developing<br />
  functional natural language processing with complex semantics is a very hard<br />
  problem, one we have been experimenting with for fifty years, but one that also<br />
  benefits greatly from scale and parallelism, two strategies that are increasingly<br />
  affordable each year. </p>
<p>It is true that<br />
  other countries will take up our slack to a certain degree if we drop the ball,<br />
  but we must realize that an international race has not yet even begun in earnest,<br />
  as national leadership has not yet materialized on this issue. Transnational<br />
  network development institutions like the <a<br />
href="http://www.itu.int/ITU-D/conferences/wtdc/2002/brochure/who_what_where_why.html">ITU</a><br />
  are wonderful starts, but it will take a leading nation stepping boldly into<br />
  the breach to accelerate the world&#8217;s response to this issue. For a valuable<br />
  comparison, the roughly six billion dollar annual worldwide funding that exists<br />
  today in nanotechnology (in round figures, 1 billion public and 1 billion private in each of the<br />
  U.S., Europe, and Asia) was greatly accelerated by the United States&#8217; public<br />
  multiyear leadership on the National Nanotechnology Initiative, proposed to<br />
  the White House by <b>Mike Roco</b><br />
  in 1999, at a level of half a billion dollars annually, and funded beginning<br />
  in 2001. </p>
<p>The longer we<br />
  choose not to declare broadband and the LUI as developmental goals and support<br />
  them with escalating innovation and consistent funding, the longer we delay<br />
  their arrival.</p>
<p>The second benefit<br />
  of declaring this goal, better collective foresight, may be even more important<br />
  than the time we save. By declaring good developmental goals early on, we learn<br />
  to see the world as the information processing system that it really is, not<br />
  simply as the collection of human-centric dramas we often fancy it to be. With<br />
  this new insight we begin to look for ways to catalyze the beneficial accelerations<br />
  occurring in almost all of our technologies, and ways to block the harmful ones<br />
  long enough for overpowering immune systems to mature. And we discover the common<br />
  infrastructures upon which so many of our goals converge. </p>
<p>For example,<br />
  just about all of our cherished social goals seem dependent on the quality and<br />
  quantity of information getting to the individual. You can&#8217;t fix an antiquated,<br />
  politically deadlocked educational system, for example, without a functional<br />
  LUI, which would educate the world&#8217;s children in ways no human ever could. You<br />
  can&#8217;t create a broadly accessible or useful health care system. Or security<br />
  system.</p>
<p>Computer networks,<br />
  through the humans they connect and the social and digital ecologies they foster,<br />
  will soon educate human beings to be good citizens far better than any of today&#8217;s<br />
  pedagogical systems ever could. They will make us more productive, day by day,<br />
  than we ever dreamed we could be. I think it&#8217;s time to move beyond our hubris<br />
  and acknowledge the human-surpassing transformations taking place. If we don&#8217;t,<br />
  other countries will take the lead. Look to China, whose technological revolution<br />
  is now well under way, or even to India, who recently declared a <a href="http://www.spacedaily.com/2003/031102090715.c9clbc6e.html">2.7<br />
  billion, four-year program</a> to build an achievable proto-LUI by 2007. That&#8217;s<br />
  real leadership, as long as the goals are set to be deliverable. C&#8217;mon America,<br />
  let&#8217;s do it!</p>
<p>Let me briefly turn now from national to personal disappointments.<br />
  We who study science and technology can often see what&#8217;s coming, and yet we<br />
  remain stuck in the Wild Wild West (e.g., today&#8217;s World Wide Web). One of my<br />
  heroes, <b>F.M. Esfandiary</b> (later, FM-2030), wrote a wonderful little book,<br />
  <i><a<br />
href="http://www.amazon.com/exec/obidos/tg/detail/-/0393086119/">Optimism One</a></i>,<br />
  1970, where he described his &quot;deep nostalgia for the future.&quot; One<br />
  of his lesser known works, <i><a<br />
href="http://www.fictionwise.com/ebooks/eBook7527.htm">UpWingers</a></i>, 1973,<br />
  was a brief manifesto for a political outlook neither right wing, nor left wing,<br />
  but &quot;up wing,&quot; one defined by assessing which choices in science and<br />
  technology will accelerate us the most humanely into a better world. I consider<br />
  myself an up winger, and hope to see the spread and maturation of that political<br />
  philosophy in coming years. Yet I see how far we remain from defining ourselves<br />
  in those terms, and that can be discouraging, at times.</p>
<p>Take a look at those sepia-toned photos of San Francisco pioneers in the late<br />
  1800s. They were the edge explorers of the day, like my own identity groups,<br />
  the <a<br />
href="http://www.wfs.org/">futurists</a> and <a<br />
href="http://www.transhumanism.org/">transhumanists</a> today. Every once in a<br />
  while you&#8217;ll see one of these individuals look out at you with haunted eyes.<br />
  Perhaps they had read <b>Edward Bellamy&#8217;s</b> hugely-popular futurist work,<br />
  <i><a href="http://reason.com/0008/fe.tp.looking.shtml">Looking Backward: 1887-2000</a></i>.<br />
  Perhaps they were even members of one of the 150 or so Bellamy Clubs of the<br />
  day. The turn of the century was a time of major technological punctuation,<br />
  led by a profusion of new technologies (trains, electricity, internal combustion,<br />
  etc.) in many ways more disruptive and dramatic than any we have seen in this<br />
  generation, even if not faster-paced. No doubt the average futurist in that<br />
  era was tormented by many of the primitivisms of the day. That pioneer of yesteryear<br />
  is you and me, today. The more things change, the more some things stay the same.<br />
  In high school, I often talked about posing our Smart family for a group shot,<br />
  with a background of the &quot;coolest&quot; technologies of the day: sports<br />
  car, helicopter, personal computer, industrial robot, bulky cellphone, the works.<br />
  The central gag is that we&#8217;d all be wearing handcuffs, looking out with that<br />
  haunted pioneer&#8217;s expression. The unwritten caption being: &quot;Help! Get me<br />
  the hell out of this primitive age!&quot; I think that picture would age quite<br />
  well over the years. We could take one every ten years, in fact, and I know<br />
  that at least my own expression wouldn&#8217;t change much.</p>
<p>A healthy disappointment<br />
  in the present can be motivating, as long as we keep our perspective. We never<br />
  want to lose our naturalist&#8217;s love and scientist&#8217;s wonder for the amazingly<br />
  beautiful and well-designed world that already exists, for it is only in understanding<br />
  this world that we can help create the next. As Esfandiary observed, we have<br />
  to come to terms with our angst about the primitive aspects of the present,<br />
  and use it for creative purposes.</p>
<p>This said, one<br />
  major personal disappointment that every futurist must eventually face, before<br />
  we die, is how bleak our prospects presently appear for achieving personal immortality<br />
  in the biological domain. Even our best longevity strategies appear to have<br />
  precious little chance of changing this reality. Unfortunately, they are pitted<br />
  against a massively parallel nonlinear system of unimaginable complexity and<br />
  contingency that appears developmentally programmed to start falling apart at<br />
  an accelerating rate after sexual maturity. This is an unpopular position to<br />
  take among some of the more bio-centric transhumanists, but I will go on record<br />
  predicting that in 2020, even as we are witnessing such powerful infotech advances<br />
  as the LUI, most of us will still be losing our short-term memory at 50, many<br />
  of us will continue to get Alzheimer&#8217;s at 80, and more than 95 percent of us<br />
  will be right on target for a biological death some time between 70 and 100,<br />
  with a negligible few of us living a decade or two longer, in rapidly declining<br />
  health. Such conditions are endemic to the Wild West, and our primitive science<br />
  seems currently a very long way from being able to make them go away. </p>
<p>Thus, for any<br />
  futurist willing to look beyond the hype to the hard data in the biological<br />
  sciences, we soon discover a major disconnect between what we would like and<br />
  what is physically possible. This disconnect is intrinsic to biology, but it<br />
  does not exist in our increasingly self-organizing information technologies,<br />
  and that, I think, is a major clue to the nature of the future. Attaining a<br />
  measure of cybernetic immortality may arguably even be inevitable for humanity<br />
  in a post-singularity era, as we will discuss shortly.</p>
<p>Any sensitive<br />
  futurist today will tell you that slowing and eventually reversing the rich/poor<br />
  divides is one of the major problems of our generation. Yet even with the tremendous<br />
  scale of this problem, as technology quickens we can at least see the corrective<br />
  path ahead. As the information access divide closes everywhere in the LUI era,<br />
  we can expect the education, then human rights, then public health, and eventually<br />
  even wealth and power divides to inexorably follow suit. But once basic public<br />
  health and medical care are available to all citizens of the planet in the latter<br />
  half of this century, the most fundamental problem with our human biology will<br />
  no longer be the rich/poor medical therapy divide. The fundamental problem will<br />
  be that so few of our medical therapies will have anything but the mildest preventive<br />
  effect against the ravages of aging. Human beings are deeply, inaccessibly developmentally<br />
  programmed to be materially recycled, ironically as we reach the peak of our<br />
  life wisdom. </p>
<p>We can expect<br />
  this unfortunate condition to last at least until the post-singularity A.I.&#8217;s<br />
  development of advanced nanotechnology, which may take many decades itself.<br />
  But by then, as I&#8217;ll argue later, living in the confinement of a biological<br />
  body, even one carefully reengineered for negligible senescence, will no longer<br />
  be the game we want to play. No matter how you stack the scenarios, biological<br />
  longevity of any significant degree doesn&#8217;t seem to play a part in the future<br />
  story of local intelligence.</p>
<p>Fortunately,<br />
  we remain amazingly adaptable, even to our own deaths, which will remain on<br />
  highly predictable, steep-sloped actuarial curves on this side of the singularity,<br />
  regardless of what some transhumanists will tell you. We can always find happiness<br />
  by getting back to basics. We can appreciate the deep natural intelligence and<br />
  informational immortality already encoded in the system, if not the individual.
  </p>
<p>When I encounter<br />
  one of life&#8217;s immovable objects I&#8217;ll try harder up to a point, but when that<br />
  doesn&#8217;t work I&#8217;ve learned the peace of slowing down, cherishing the moment,<br />
  honoring the inner primate, enjoying the quiet self, regrouping and rethinking<br />
  my plans, even as my dreams of personal transformation are necessarily contracted.<br />
  As the Mouseketeer <b>Annette Funicello</b><br />
  has said, on <a<br />
href="http://www.calfund.org/8/giving_funicello.php">dealing with multiple sclerosis</a>:<br />
  &quot;I choose not to give up. That would be too easy.&quot; And far less interesting.</p>
<p><b>3. Assuming you die at<br />
  the age of 100, what will be the biggest difference between the world you were<br />
  born into and the world you leave? </b></p>
<p>This is a complex<br />
  question. To my eyes, the world seems to progress by fits and starts, by rapid<br />
  punctuations separated by long droughts of less revolutionary equilibrium states.<br />
  Fortunately, these equilibrium periods seem to get progressively shorter with<br />
  time, because the entire planet&#8217;s technological intelligence is learning in<br />
  an <a href="http://www.singularitywatch.com/#lead">increasingly autonomous fashion</a>,<br />
  at a rate that is at least ten millionfold faster than our own. </p>
<p>So what will<br />
  be the biggest punctuation of my lifetime? From my perspective, we are currently<br />
  chugging through the equilibrium flatlands in the last third of an Information<br />
  Age, one that will likely be seen in hindsight as running for about seventy<br />
  years, from 1950 to 2020. I expect this to be followed by a punctuated transition<br />
  to a shorter Symbiotic Age, running perhaps thirty years, from 2020 to 2050. I<br />
  see these equilibrium eras as part of an accelerating spiral of punctuated evolutionary<br />
  development, and I consider several of the general, statistically predictable<br />
  developmental features of this acceleration to be tuned in to the special parameters<br />
  of the universe we inhabit. Consider skimming my web page on the <a<br />
href="http://www.singularitywatch.com/spiral.html">Developmental Spiral</a> if<br />
  you&#8217;d like to explore this spiral of accelerating emergences a bit further.
  </p>
<p>To answer your<br />
  question then, I think the transition to symbiotic computing systems, the decade<br />
  or two surrounding our entry to the LUI era, will be the biggest difference<br />
  I&#8217;ll see. The Symbiotic Age will be a time when almost all of us will consider<br />
  computers as actually useful (many today don&#8217;t), and when the vast majority<br />
  of us begin to feel naked outside the network. When we all have what futurist<br />
  <b>Alex Lightman</b> calls &quot;wireless everywear&quot; access to our talking computer interface,<br />
  and when computers start to do very useful, high level things in our lives.</p>
<p>By the end of<br />
  this age, for that vast majority of us who choose to participate in digital<br />
  ecologies, a mature LUI will be interfaced with personal computers that are<br />
  capturing our entire lives digitally (Lifecams), that help us stay proficient<br />
  in a small number of carefully chosen skills (Knowledge Management) and that,<br />
  by remembering everything we have ever said, begin to extensively model not<br />
  only our preferences, but our personalities as well. <a<br />
href="http://mysite.verizon.net/william.bainbridge/dl/capture.htm">Personality<br />
  Capture</a>, a first generation form of <a<br />
href="http://www.aleph.se/Trans/Global/Uploading/">uploading</a>, is one of the<br />
  most important aspects of the post-2020 world, and one of the least reported<br />
  and understood, at present. Read <b>William Sims Bainbridge</b><br />
  for more on this gargantuan developmental attractor.</p>
<p>At that point, our computers will become our best friends, our fraternal twins,<br />
  and human beings will be intimately connected to each other and to their machines<br />
  in ways few futurists have fully grasped to date. Read Ray Kurzweil&#8217;s <i><a<br />
href="http://www.amazon.com/exec/obidos/tg/detail/-/0140282025/">The Age of Spiritual<br />
  Machines</a></i>, 1999 for one excellent set of longer term scenarios. Read<br />
  B.J. Fogg&#8217;s <i><a<br />
href="http://www.amazon.com/exec/obidos/tg/detail/-/1558606432">Persuasive Technology</a></i>,<br />
  2002 for some nearer term ones. Today&#8217;s early modeling systems, like <a<br />
href="http://face-and-emotion.com/dataface/facs/new_version.jsp">FACS</a> for<br />
  reading human facial emotion, will be improved and integrated into your personalized<br />
  LUI, which will monitor both internal and external biometrics to improve our<br />
  health, outlook, and performance.</p>
<p>We&#8217;ll communicate<br />
  intelligently with all our tools, giving constant verbal feedback to their designers.<br />
  We&#8217;ll spend most of our waking lives exploring a simulation space (simspace)<br />
  that is so rich, educational, entertaining, and productive, that we will call<br />
  today&#8217;s mostly non-virtual world &quot;slowspace&quot; by comparison, a place<br />
  many of us will drop back into only when we aren&#8217;t working, learning, and exploring.<br />
  Slowspace will remain sacred, and close to our hearts, but it will begin to<br />
  become secondary and functionally remote, like the home of our youth.</p>
<p>Circa 2050, in<br />
  my current estimation, we might see another punctuation to an Autonomy Age,<br />
  when large scale, biologically-inspired computing systems begin to exhibit higher<br />
  level human intelligence. Many of our technologies will at that time be able<br />
  to autonomously improve themselves for extended periods of time. During this<br />
  era, machine intelligence, even in our research labs, will continue to blunder<br />
  into dead ends everywhere, the cul-de-sacs that are the typical result of chaotic<br />
  evolutionary searches. But these systems will very quickly be able to reset<br />
  themselves, with little human assistance, to try a new evolutionary developmental<br />
  approach. I wouldn&#8217;t expect that period to last very long. Perhaps a decade<br />
  or so later, from our perspective, equilibria in terms of technological intelligence<br />
  will disappear altogether. </p>
<p>We will then<br />
  have arrived at the <a href="http://www.singularitywatch.com/#what">technological<br />
  singularity</a>, a phase change, a place where the technology stream flows so<br />
  fast that new global rules emerge to describe the system&#8217;s relation to the slower-moving<br />
  elements in its vicinity, including our biological selves. That doesn&#8217;t mean<br />
  we won&#8217;t be able to understand the general rules that emerge. On the contrary,<br />
  most of these may be obvious to us, even now. But it means that many of the<br />
  particular states occurring within those rules will become impenetrable to pre-singularity<br />
  minds. </p>
<p>A human-surpassing<br />
  general artificial intelligence will be a physical system, and if it is physical,<br />
  much of its architecture must be simple, repetitive, and highly understandable<br />
  even by biological minds. Consider, for example, just how much we know about<br />
  the neural architecture that creates our own consciousness, without being able<br />
  to predict consciousness emergence, or to comprehend its nature from first principles.<br />
  So it must be with the A.I.s to come: while much of their structure will be<br />
  tractable and tangible to us in a reductionist sense, much of their holistic<br />
  intelligence will become impenetrable to our biological minds.</p>
<p>This impenetrability<br />
  is nothing mystical; we already see it in the way the emergent features of any<br />
  complex technology such as a supercomputer, automated refinery, robotic factory,<br />
  or supply chain management system are already poorly comprehended by all but<br />
  those few of us involved in its analysis or design. The difference will be that<br />
  the emergent intelligence of virtually all planetary technology will begin to<br />
  display this inscrutability, not just to average users, but even to the experts<br />
  involved in its creation. </p>
<p>Consider for a moment the following presently unprovable assertion: If ethics<br />
  are a necessary emergence from computational complexity, then I contend that<br />
  these systems will be ethically compelled to minimize the disruption we feel<br />
  in the transition. As a result, most of the self-improvement of self-aware<br />
  A.I.s will occur on the other side of an event horizon, beyond which biological<br />
  organisms cannot directly perceive, only speculate. Yet at the same time, our<br />
  technologies will continue to gently become ever more seamlessly integrated<br />
  with our biological bodies, so that when we say we don&#8217;t understand aspects<br />
  of the emergent intelligence, it will increasingly be like saying we don&#8217;t understand<br />
  emergent aspects of ourselves. But unlike our biological inscrutabilities, the<br />
  technological portions of ourselves that we don&#8217;t understand will be headed<br />
  very rapidly toward new levels of comprehension of universal complexity, playing<br />
  in fields forever inaccessible to our slow-switching biological brains. </p>
<p>My current estimate<br />
  for that transition would be around 2060, but that is a guess. We need funded<br />
  research to be able to achieve better insight, something that hasn&#8217;t yet happened<br />
  in the singularity studies field. The generation being born today will likely<br />
  find that a very interesting time. At the same time, as I have said, I expect<br />
  they won&#8217;t consider it to be a perceptually disruptive time, at least any<br />
  more than prior punctuations. A time of massive transformation, but very likely<br />
  significantly less stressful than prior punctuations, given the way computational<br />
  complexity creates its own increasingly fine-grained stability, if one looks<br />
  closely at the universal developmental record.</p>
<p>Looking at universal<br />
  history, every singularity seems to be built on a chain of prior singularities.<br />
  Considering the chain that has led to human emergence, each appears to have<br />
  rigorously preserved the local acceleration of computational complexity. The<br />
  tech singularity certainly has a lot of significance to human beings, as after<br />
  that date our own biology becomes a second-rate computational system in this<br />
  local environment. This emergence, obvious to many high school students today,<br />
  still irritates, angers, and frightens many scholars, who have attempted to<br />
  dismiss it by calling it &quot;techno-transcendentalism,&quot; &quot;cybernetic<br />
  totalism,&quot; &quot;hatred of the flesh,&quot; &quot;religious belief,&quot;<br />
  &quot;millennialism,&quot; or any number of other conveniently thought-stopping<br />
  labels.</p>
<p>But from a universal<br />
  perspective, the coming technological singularity looks like just another link<br />
  in a very fast, steep climb up a nearly vertical slope on the way to an even<br />
  more interesting destination. My best present guess for that destination is<br />
  the <a href="http://www.singularitywatch.com/specu.html">developmental singularity</a>,<br />
  a computational system that rapidly outgrows this universe and transitions to<br />
  another domain. Fortunately, there are many practical insights we can gain today<br />
  from developmental models, as they testably predict the necessary direction<br />
  of our complex systems. Our own organization, the <a<br />
href="http://www.accelerating.org/">Institute for Accelerating Change</a>, hopes<br />
  to see more funding and institutional interest in these topics in coming decades.</p>
<p>But getting back<br />
  to my own mortality, even with the best human-guided medical and preventive<br />
  care that money can buy, I&#8217;m not at all sure I&#8217;ll live to 100, unlike many of<br />
  my more sanguine transhumanist friends. Human bodies are deeply developmentally<br />
  designed to have our construction materials recycled, as best we can tell. I<br />
  predict our planet will see only a very mild increase in supercentenarians in<br />
  the next fifty years, regardless of all the wonderful schemes of &quot;negligible<br />
  senescence&quot; by passionate researchers like <b>Aubrey<br />
  De Grey.</b> Only infotech, not biotech, is on an accelerating developmental growth curve,<br />
  apparently for deep universal reasons. </p>
<p>What I have just<br />
  said goes against the dominant dogma, promoted by indiscriminately optimistic<br />
  futurists and a complicit biotech industry, both of which are strongly motivated<br />
  to believe that we will see a powerful &quot;secondary acceleration&quot; in<br />
  biotech, carried along by our primary acceleration in infotech. But while we<br />
  will see a very dramatic acceleration in biotech <i>knowledge</i>, I humbly<br />
  suggest that our existing knowledge of biological development already tells<br />
  us that we will be able to use this information to make only very mild changes<br />
  in biological capabilities and capacities, almost exclusively only changes that<br />
  &quot;restore to the mean&quot; those who have lost their ability to function<br />
  at the level of the average human being. </p>
<p>As I explain<br />
  in <a<br />
href="http://www.singularitywatch.com/biotech.html">Understanding the Limitations<br />
  of Twenty-First Century Biotechnology</a>, there are a number of very fundamental<br />
  reasons why biotech, aided by infotech, cannot create accelerating gains within<br />
  biological environments. Yes, with some very clever and humane commercializations<br />
  of caloric restriction and a handful of other therapies we might see twenty<br />
  times more people living past 100 than we see today, people with fortuitous<br />
  genes who scrupulously follow good habits of nutrition and exercise. That is<br />
  a noble and worthwhile goal. But we must also remember that virtually no one<br />
  lives beyond 100 today, so a 20X increase is still only very mild in global<br />
  computational and humanitarian effect. This will add to our planetary wisdom,<br />
  and is something to strive toward, but this is not a disruptive change, for<br />
  deep reasons to do with the limitations of the biological substrate. </p>
<p>Furthermore, genetic engineering, as I discuss in the link above, cannot create<br />
  accelerating changes using top-down processes in terminally differentiated organisms<br />
  like us. This intervention would have only mild effects even if it could get<br />
  beyond our social immune systems to the application stage, which in most cases<br />
  it thankfully cannot. Perhaps the most disruptive biotech change we can reliably<br />
  expect, a cheap and effective memory drug that allows us temporary, caffeine-like<br />
  spikes in our learning ability, followed by inevitable &quot;stupid periods&quot;<br />
  where we must recover from the simplistic chemical perturbation, would certainly<br />
  also improve the average wisdom of human society. But even this amazing advance<br />
  would not even double our planetary biological processing capacity, something<br />
  that happens in information technologies every 18-24 months. </p>
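<p>The contrast drawn here between a one-time doubling of biological capacity and a recurring 18-24 month doubling in infotech can be made concrete with a little compound-growth arithmetic. The sketch below is illustrative only; the function and the 1.5-year doubling time are assumptions chosen to match the low end of the essay&#8217;s figure, not anything stated by the author:</p>

```python
# Illustrative sketch: compound growth under a fixed doubling time.
# The names and the 1.5-year doubling period are assumptions for this example.
def growth_factor(years: float, doubling_time_years: float) -> float:
    """Total multiplication of capacity after `years`, if it doubles
    every `doubling_time_years`: 2 ** (years / doubling_time_years)."""
    return 2 ** (years / doubling_time_years)

# Infotech doubling every 18 months compounds to roughly 100x over a decade,
# whereas a single doubling of biological capacity is just 2x.
infotech_decade = growth_factor(10, 1.5)   # roughly 101.6
biotech_once = 2.0
```

<p>The point of the sketch is simply that a recurring doubling compounds exponentially, so even a dramatic one-time biological gain is quickly dwarfed.</p>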
<p>In summary, many<br />
  decades before the tech singularity arrives I expect to either be chemically<br />
  recycled (most likely), or to be in some kind of suspended animation. Cryonic<br />
  suspension, for all its life-affirming intent, will likely stay entirely marginalized<br />
  in the first world prior to the singularity for a number of reasons, both psychosocial<br />
  and technological. At present, I&#8217;d consider it for myself only if a number of<br />
  presently unlikely conditions transpire: 1) neuroscience comes up with a model<br />
  that tells us what elements of the brain need to be protected to preserve personality,<br />
  2) cryonics researchers can either prevent or show the irrelevance of the extensive<br />
  damage that presently occurs during freezing, 3) most of my friends are doing<br />
  it (they are currently not), and 4) I expect to be revived by intelligent machines<br />
  not in some far future, but very soon after I die, while many of my biological<br />
  friends are still alive. </p>
<p>The second and<br />
  the fourth conditions deserve some expansion. As to the second condition, we<br />
  do not yet know to what extent the brain&#8217;s complexity is dependent on the intricate<br />
  three dimensional structure in which it emerges. That structure, today, is grossly<br />
  deformed and degraded in the freezing process, which currently leads both to<br />
  destruction (via stochastic fusion) of at least some neural ultrastructure,<br />
  and to intense cellular compression (and erasure of at least some membrane structure,<br />
  again by fusion) as ice forms in the extracellular neural interstices. Will<br />
  we come up with new preservation protocols? We can always hope.</p>
<p>The reason the<br />
  fourth condition of rapid reanimation is important to me is because I know in<br />
  my heart that once I woke up from any A.I.-guided reanimation procedure, in<br />
  order to usefully integrate into a post-singularity society I would soon choose<br />
  to change myself so utterly and extensively that it would be as if I never existed<br />
  in biological form. My lifecam traces could be uploaded and the cybernetic &quot;me&quot;<br />
  that emerged would not be valuably different. So what would be the point? I<br />
  think we are nearly ready to move beyond the fiction of our own biological uniqueness<br />
  having some long term relevance to the universal story. I expect our future<br />
  information theory will inform us of the suboptimality of personal biological<br />
  immortality. For those who say &quot;screw suboptimality,&quot; I suggest that<br />
  we&#8217;ll eventually be educated out of that way of thinking as surely as our ancestors<br />
  outgrew other forms of mental slavery. For me, the essence of individual life<br />
  is to use one&#8217;s complexity in the matrix in which it was born. Attempts to transmit<br />
  it more than a short distance away from that environment are bound to be exercises<br />
  in frustration, missing one of the basic motives of life, to do great things<br />
  with your contemporaries. Ask any Fourth World adult who is suddenly transplanted<br />
  to New York City and he&#8217;ll tell you the same.</p>
<p><b>4. What future development<br />
  that you consider most likely (or inevitable) do you look forward to with the<br />
  most anticipation? </b></p>
<p>I look forward<br />
  greatly to the elimination of the grosser forms of coercion, dehumanization,<br />
  violence and death that occur today. </p>
<p>Admittedly, these<br />
  seem to be processes that will always be with us at some fundamental level.<br />
  Computational resources will very likely remain competitive battlegrounds in<br />
  the post singularity era, because we inhabit a universe of finite-state computational<br />
  machines pitted against all the remaining unsolved problems, in a G&#246;delian-incomplete<br />
  universe. And bad algorithms will surely die in that environment, far more swiftly<br />
  than less fit organisms or ideas die today. </p>
<p>But when a bad<br />
  idea dies in our own minds, we see that as a lot less subjectively violent than<br />
  our own biological deaths. Over time, love, resiliency, and consciousness win.<br />
  As <b>Ken<br />
  Wilber</b> (<i><a<br />
href="http://www.amazon.com/exec/obidos/tg/detail/-/1570627401/">A Brief History<br />
  of Everything</a></i>, 2001) might say, the integrated self learns a privileged<br />
  perspective from which death is no longer troubling. Death becomes regulated<br />
  in a fine-grained manner; it loses its sting, it is subsumed, becoming simply<br />
  growth. But it takes a lot of luck and learning for us to get to that place.</p>
<p>In many ways,<br />
  I think the collective consciousness of our species has come to understand that<br />
  we have already achieved a very powerful degree of informational immortality.<br />
  By and large, our evolutionary morality guides us very strongly to act and think<br />
  in that fashion. I look forward to the individual consciousnesses of all species<br />
  on this planet gaining that victory in coming decades. Including the coming<br />
  cybernetic species we are helping to create.</p>
<p>Sci-tech systems<br />
  are not alien or artificial in any meaningful sense. As <b>John McHale</b> said (<i><a<br />
href="http://www.amazon.com/exec/obidos/tg/detail/-/080760495X/">The Future of<br />
  the Future</a></i>, 1969), technology is as natural as a snail&#8217;s shell, a spider&#8217;s<br />
  web, a dandelion&#8217;s seed&#8212;many of us just don&#8217;t see this yet. Digital ecologies<br />
  are the next natural ecology developing on this planet, and technology is a<br />
  substrate that has shown, with each new generation, that it can live with vastly<br />
  less matter, energy, space, and time (what I call MEST compression) than we<br />
  biological systems require for any fixed computation. Wetware simply cannot<br />
  perform that feat. Technology is the next organic extension of ourselves, growing<br />
  with a speed, efficiency, and resiliency that must eventually make our DNA-based<br />
  technology obsolete, even as it preserves and extends all that we value most<br />
  in ourselves. </p>
<p>I can&#8217;t stress<br />
  enough the incredible efficiencies that emerge in the miniaturization of physical-computational<br />
  systems. If MEST compression trends continue as they have over the last six<br />
  billion years, I propose that tomorrow&#8217;s A.I. will soon be able to decipher<br />
  substantially all of the remaining complexities of the physical, chemical, and<br />
  biological lineage that created it, our own biological and conscious intricacies<br />
  included, and do all this with nano and quantum technologies that we find to<br />
  be impossibly, &quot;magically&quot; efficient. In the same way that the entire<br />
  arc of human civilization in the petrochemical era has been built on the remains<br />
  of a small fraction of the decomposing biomass that preceded us, the self-aware<br />
  technologies to come will build their universe models on the detritus of our<br />
  own twenty first century civilization, perhaps even on the trash thrown away<br />
  by one American family. That&#8217;s how surprisingly powerful the MEST compression<br />
  of computation apparently is in our universe. It continually takes us by surprise.
  </p>
<p>I am optimistic<br />
  that these still poorly characterized physical trends will continue to promote<br />
  accelerating intelligence, interdependence, and immunity in our informational<br />
  systems, and look forward to future work on understanding this acceleration<br />
  with great anticipation.</p>
<p><b>5. What future development<br />
  that you consider likely (or inevitable) do you dread the most? </b></p>
<p>I worry that<br />
  we will not develop enough insight to overcome our fear of the technological<br />
  future, both as individuals and as a nation. To paraphrase <b>Franklin Roosevelt</b>, speaking at the depths of our Great<br />
  Depression, the only thing we have to fear is fear itself. </p>
<p>Many in our society<br />
  have entered another Great Depression recently. This one is existential, not<br />
  economic. A century of increasingly profound process automation and computational<br />
  exponentiation has helped us realize that humanity is about to be entirely outpaced<br />
  by our technological systems. We are fostering a substrate that learns multi-millionfold<br />
  faster than us, one that will soon capture and exceed all that we are. Again,<br />
  Roosevelt&#8217;s credo is applicable. If we ignore it we will end up being dragged<br />
  by the universe into the singularity, mostly unconsciously, kicking and screaming<br />
  and fighting each other, rather than walking upright, picking our own path.
  </p>
<p>I&#8217;m concerned<br />
  that we will decide later, rather than earlier, to learn deeply about the developmental<br />
  processes involved. That we will rely on our own ridiculously incomplete egos<br />
  and partial, mostly top-down models to chart the course, rather than come to<br />
  understand the mostly bottom-up processes that are accelerating all around us.<br />
  I&#8217;m concerned we won&#8217;t realize that humans are like termites, building this<br />
  massive mound of technological infrastructure that is already vastly more complex<br />
  than any one human understands, and unreasonably stable, self-improving, self-correcting,<br />
  self-provisioning, energy and resource minimizing, and so on. Soon a special<br />
  subset of these systems will be self-aware, and the caterpillar will turn into<br />
  a butterfly, freeing the human spirit. Gaining such knowledge about the developmental<br />
  structure of the system would surely allow us to chart a better evolutionary<br />
  course on the way.</p>
<p>Through a special<br />
  combination of geography, historical circumstance, intention, and luck, the<br />
  United States has inherited the position of World Leader of our Wonderfully<br />
  Multicultural Planet. With our hard-won history of individual rights, our historically<br />
  productivity-based culture, our generous immigration policies, our pluralism,<br />
  well-developed legal immune systems, social tolerance, and other advantages<br />
  we hold this position still, for now. We may rise to recognize the vision-setting<br />
  responsibility that comes with holding this position. Or we may continue to<br />
  subconsciously fear technology, as we have intermittently over the last century<br />
  (technology, rather than human choice, has been mistakenly blamed for the World<br />
  Wars, the Great Depression, the Cold War, Vietnam, Rich/Poor Divides, Global<br />
  Pollution, Urban Decay, you name it). Alternatively, we may decide that the<br />
  wise use of science and technology must be central to our productivity, educational<br />
  systems, government and judicial systems, media, and culture, the way they so<br />
  obviously were when we were a new nation. Fortunately, there are signs that<br />
  other countries, such as China, Japan, South Korea, Thailand, Singapore, are<br />
  actively choosing the latter road. </p>
<p>Several of these<br />
  countries, most notably Singapore and China, continue to operate with glaring<br />
  deficits in the political domain. Yet they are experiencing robust growth due<br />
  to enlightened programs of technological and economic development. Nevertheless,<br />
  none of these countries are yet successfully multicultural enough, or have sufficiently<br />
  well developed political immune systems (institutionalized pluralism, pervasive<br />
  tort law, independent media, mature insurance systems, tolerant social norms)<br />
  to qualify as leaders of the free world, at the present time. It is telling<br />
  that the owners of today&#8217;s rapidly-growing Chinese manufacturing enterprises<br />
  find it most desirable to keep their second homes in the United States, due<br />
  to our special combination of both unique social advances and technological<br />
  development. Much of the world&#8217;s capital still flows first to the U.S., to seek<br />
  the highest potential return. But for how long can this continue if we remain<br />
  lackluster in our technological leadership, riding on our prior political and<br />
  economic advances? </p>
<p>It is important<br />
  to note that being defenders of the free world is certainly one critical technological<br />
  role which we have unilaterally inherited since the end of the Cold War. Furthermore,<br />
  it is a role to which I would argue that we are aggressively and mostly intelligently<br />
  applying ourselves. Yet while this is critical, it is not enough to secure our<br />
  leadership position. We must lead with proactive social reform in mind, not<br />
  simply security, or we remain guilty of resting on our accomplishments. In a<br />
  world where autocratic Empires are turning into democratic Republics, we must<br />
  lead the move to an increasingly participatory, democratic, and empowering nation<br />
  state. The world remembers and emulates the security of Sparta, but almost everything<br />
  else falls in Athenian territory. We need to find the high ground of both of<br />
  these legacies, and integrate them into our plans for the coming generation.
  </p>
<p>As long as we<br />
  define ourselves by our fear of transformational technologies, and our dread<br />
  of being exceeded by the future, we will continue in ignorance and self-absorption,<br />
  rather than wake up to our purpose to understand the universe, and to shape<br />
  it in accord with the confluence of our desires and permissible physical law.</p>
<p>For over a century<br />
  we&#8217;ve seen successive waves of increasingly more powerful technologies empower<br />
  society in ever more fundamental ways. Today&#8217;s computers are doubling in complexity<br />
  every 12-18 months, creating a price-performance deflation unlike any previous<br />
  period on Earth. Yet we continue to ignore what is happening, continue to be<br />
  too much a culture of celebrity and triviality, continue to make silly extrapolations<br />
  of linear growth, and bicker over concerns that will soon be made irrelevant,<br />
  continue to engage in activities that delay, rather than accelerate, the obvious<br />
  developmental technological transformations ahead.</p>
<p>I am also concerned<br />
  that we may continue to soil our own nests on the way to the singularity, continue<br />
  to take shortcuts, assuming that the future will bail us out, forgetting that<br />
  the journey, far more than the destination, is the reward. Consider that once<br />
  we arrive at the singularity it seems highly likely that the A.I.s will be just<br />
  as much on a spiritual quest, just as concerned with living good lives and figuring<br />
  out the unknown, just as angst-ridden as we are today. </p>
<p>No destination<br />
  is ever worth the cost of our present dignity and desire to live balanced and<br />
  ethical lives, as defined by today&#8217;s situational ethics, not by tomorrow&#8217;s idealizations.<br />
  If I can&#8217;t convince the Italian villager of 2120 of the value of uploading,<br />
  then he will not willingly join me in cyberspace until his entire village has<br />
  been successfully recreated there, along with much, much more he has not yet<br />
  seen. I applaud his Luddite reluctance, his &quot;show me&quot; pragmatism,<br />
  for only that will challenge the technology developers to create a truly humanizing<br />
  transition.</p>
<p>Finally, I&#8217;m<br />
  concerned that we may not put enough intellectual and moral effort into developing<br />
  immune systems against the natural catastrophes that occur all around us. Catastrophes<br />
  are to be expected, and they accelerate change whenever immune systems learn<br />
  from them. In my own research, there has never been a catastrophe in known universal<br />
  history (supernova, KT-meteorite, plague, civilization collapse, nuclear detonation,<br />
  reactor meltdown, computer virus, 9/11, you name it) that did not function to<br />
  accelerate the average distributed complexity (ADC) of the computational network<br />
  in which it was embedded. It is apparently this immune learning that keeps the<br />
  universe on a smooth curve of continually accelerating change. If there&#8217;s one<br />
  rule that anyone who studies accelerating change in complex adaptive systems<br />
  should realize, it is that immunity, interdependence, and intelligence always<br />
  win. This is not necessarily so for the individual, who charts his or her own<br />
  unique path to the future but is often breathtakingly wrong. But the observation<br />
  holds consistently for the entire amorphous network.</p>
<p>Nevertheless,<br />
  there have been many cases of catastrophes where lessons were not rapidly learned,<br />
  where immune systems were not optimally educated to improve resiliency, redundancy,<br />
  and variation. And in the case of human society, our sociotechnological immune<br />
  systems work best when they are aided by committed human beings, the most conscious<br />
  and purposeful nodes in our emerging global brain. Consider our public health<br />
  efforts against pathogens such as SARS and AIDS, and the strategies for success<br />
  become clear. Anything that economically improves social, political, technological,<br />
  and biological immune systems is a very farsighted development.</p>
<p>This said, one of our great challenges in coming decades is to design a global<br />
  technological and cultural immune system, a ubiquitous EarthGrid of sensing<br />
  and intelligence systems, a <i><a<br />
href="http://www.amazon.com/exec/obidos/tg/detail/-/0738201448/">Transparent Society</a></i><br />
  (<b>David Brin</b>, 1998) that has enough pluralism and fine-grained accountability<br />
  to scrupulously ensure individual liberties while also providing unparalleled<br />
  collective security. We have almost arrived at the era of SIMADs (Single Individuals<br />
  engaged in Massive Asymmetric Destruction), a term coined by the futurist <b>Jerry<br />
  Glenn</b> of the <a href="http://www.acunu.org/">Millennium Project</a>. It<br />
  is time for us to create immune systems that are capable, statistically speaking,<br />
  of ensuring continued acceleration in the average distributed complexity of<br />
  human civilization. EarthGrid appears inevitable when accelerating technological<br />
  change occurs on a planet of &quot;finite sphericity,&quot; as <b>Teilhard De<br />
  Chardin</b> would say. Knowing that can help us boldly walk the path. </p>
<p>Every sniper and serial killer should be countered today with the installation<br />
  of another set of public cameras. By their very actions they are building the<br />
  social cages that will eventually catch them, and all others like them, so we<br />
  might as well publicly acknowledge this state of affairs, for maximum behavioral<br />
  effect. Ideally, ninety five percent of these cameras will remain in private,<br />
  not public hands, as is the current situation in Manhattan. When will we see<br />
  RFID in all our products? When will we finally live in a world where every citizen<br />
  transmits an electronic signal uniquely identifying them to the network at all<br />
  times? When will we have a countervailing electronic democracy, ensuring this<br />
  power is used only in the most citizen-beneficial manner? Today we see early<br />
  efforts in these areas, but as I&#8217;ve written in previous articles, there is still<br />
  far too much short term fear and lack of foresight.</p>
<p>If we think carefully<br />
  about all this, we will realize that a broadband LUI network must be central<br />
  to the creation of tomorrow&#8217;s n</p>
]]></content:encoded>
			<wfw:commentRss>https://blog.speculist.com/speaking_of_the_future/riding-the-spir-1.html/feed</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
	</channel>
</rss>
