Riding the Spiral, Part 2

December 4, 2003

This is the second half of our groundbreaking interview with John Smart of the Institute for Accelerating Change. (Read Part One)

The Developmental Singularity

I’m familiar with the idea of a singularity from reading about black holes.
As I understand it, the event horizon of a black hole is the point beyond which
no light can escape. Perceived time slows to an absolute standstill at the
event horizon. At the singularity, gravity becomes infinite, and what we normally
think of as the "laws of nature" cease to function the way we expect
them to. The singularity seems to be the ultimate physical enigma. What then
is this technological singularity, and in what way is it analogous to the singularity
of a black hole?

This last question
may be the most important of our time, with regard to understanding the future
of universal intelligence. Or it may be a greased pig chase. Only posterity
can decide.

I’ve been chipping away at the topic since the seventh grade,
when I had a series of early and very elegant intuitions in regard to accelerating
change, speculations that I’d love to see seriously researched and critiqued
in coming years. In 1999 I started a website on the subject, SingularityWatch.com.
In 2001 I did an extended interview for Sander Olson at Nanomagazine.com, and in 2003 I and a few other colleagues formed a nonprofit, the Institute for Accelerating Change (Accelerating.org), to further inquiry in this area. The most important thing we’ve done to date is a very well-received conference at Stanford, Accelerating Change 2003.
Finally, I’m presently writing a book, Destiny of Species, on the topic
of accelerating change, but please don’t ask me how it’s progressing, or it
will reliably put me in a bad mood.

To begin unpacking
this question, it helps to realize that there is a menagerie
of singularities
in various literatures that we could study, with gravitational
singularities being just the most well-known type. Some generalizations can
be made, possible clues to a useful definition. Every one of these processes
engages a special set of locally accelerating dynamics that transition to some
irreversible systemic change, involving emergent features which are, at least
in part, intrinsically unpredictable from the perspective of the pre-singularity
system.

But before we
go further, I shall lay my biases on the table. I am a systems theorist. The
systems theorist’s working hypothesis—and fundamental conceit—is that analogical
thinking is more powerful and broadly valuable than analytical thinking in almost
all cases of human inquiry. This doesn’t excuse us from bad analogies, which
are legion, and it doesn’t make quantitative analysis wrong; it just places
math and logic in their proper place as powerful tools of inquiry used by weakly
digital minds. Today’s quantitative and logical tools are enabled by the underlying
physics of the universe, which are much more sublime, and such tools often have
no relation to real physical processes, which may use quanta and dimensionalities
entirely inaccessible to our current symbolisms.

Furthermore, I
take the "infopomorphic" (as compared to "anthropomorphic")
view, that all physical systems in the universe, including us precious bipeds
and even the universe itself, are engaged in computation, in service to some
grander purpose of self- and other-discovery. This philosophy has also been
described as "digital physics," and one of several variants can be
found at Ed Fredkin’s Digital
Philosophy
website. It has also been elegantly introduced by John Archibald
Wheeler’s
"It from Bit," 1989 (see href="http://www.amazon.com/exec/obidos/tg/detail/-/0521568374/">Physical Origins
of Time Asymmetry
, 1996).

Finally, I am
an evolutionary developmentalist, one who believes that all important systems
in the world, parsimoniously including the universe itself, must both evolve
unpredictably and develop predictably. That makes understanding the difference
between evolution and development one of the most important programs of inquiry.
The meta-Darwinian paradigm of evolutionary development, well described by such
innovative biologists as Rudolf Raff (see The Shape of Life, 1996), Simon Conway Morris, Wallace Arthur, Stan Salthe,
William Dembski, and Jack Cohen, is one that situates orthodox
neo-Darwinism as a chaotic mechanism that occurs within (or in some versions,
in symbiosis with) a much larger set of statistically deterministic, purposeful
developmental cycles. There are now a number of scientists applying this view
to both living and physical systems, including those exploring such topics as
self-organization, convergence, hierarchical acceleration, anthropic cosmology,
Intelligent Design, and a number of other subjects that are very poorly explained
by the classical Darwinian theory championed by Stephen Jay Gould and
Richard Dawkins.

Systems theorists
require some perspective to play their analogy games, so please indulge me as
we engage briefly and coarsely in big picture history in order to discuss the
singularity phenomenon. During the seventeenth century, with Isaac Newton’s
Principia (1687), it seems fair to say that humanity awakened to the
realization that we live in a fully physical universe. During the early twentieth
century, with Kurt Gödel’s Incompleteness Theorem (1931) and the Church-Turing
Thesis (1936), we came to suspect that we also live in a fully computational
universe, and that within each discrete physical system there are intrinsic
limits to the kinds of computation (observation, encoding) that can be done
to the larger environment. Presumably, the persistence of these limits, and
their interaction with the remaining inaccessible elements of reality, spurs
the development of new, more computationally versatile systems, via increasingly rapid hierarchical "substrate" emergences over time. At each
new emergence point a singularity is created, a new physical-computational system
suddenly and disruptively arises, a phase change of some definable type occurs.
At this point, a new local environment, or "phase space" is created
wherein very different local rules and conditions apply. That’s one predominant
systems model for singularities, at any rate.

From this physical-computational
perspective, replicating suns, spewing their supernovas across galactic space,
can be seen as rather simple physical-computational systems that, over billennia,
nevertheless encode a "record" of their exploration of physical reality,
their computational "phase space." This record appears to us in the
form of the periodic table. Once that elemental matrix becomes complex enough,
and carbon, nitrogen, phosphorous, sulfur, and friends have emerged, we notice
a new singularity occur in specialized local environments, wherein the newest
computational game becomes replicating organic molecules, chasing their own
tails in protometabolic cycles (see Stuart Kauffman, At Home in the Universe, 1996).

Again, these systems
developmentally encode their evolutionary exploration by constructing a range
of complex polymerizing systems, including autocatalytic sets. Once a particular
set becomes complex enough, we again see another phase change singularity, with
the first DNA-guided protein synthesis emerging on the geological Earth-catalyst, even before its crust had fully cooled. As precursors to fats, proteins, and
nucleic acids have all been found in our interplanetary comet chemistry, and
as we suspect that chemistry to be common throughout our galaxy, it is becoming
increasingly plausible that every one of the billions of planets (in this galaxy
alone) that are capable of supporting liquid water for billions of years may
be primed for our special type of biogenesis. This proposed transition, a singularity
in an era of accelerating molecular evolutionary development, is what A.G.
Cairns-Smith
calls "genetic takeover," an evocative phrase. Such
unicellular emergence very likely leads in turn to multicellularity, then to
differentiated multicellular systems encoding useful neural arborization patterns,
another singularity (570 million years ago), which leads to big-brained mammals
encoding mimicry memetics (100 million years ago) and to hominids encoding and
processing oral linguistic memetics (10-5 million years ago), then to the first
extrabiological technology (soft-skinned Homo habilis collectively throwing
rocks at more physically powerful leopard predators, 2 million years ago), then
to today’s semi-autonomous digital technological systems, encoding their own
increasingly successful algorithms and world-models. (Forgive me if we skipped
a few steps in this illustration).
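
As a numerical aside, the rough dates just listed already display the compression I will return to in a moment. A few lines of Python make the shrinking intervals explicit; the dates are my own illustrative round numbers from above, and the roughly-60-years-ago date for digital technology is an added assumption, not a measured quantity.

    # Successive emergence intervals, from the approximate dates quoted above.
    # All figures are illustrative round numbers; the date for digital
    # technology is an assumption added for this sketch.
    emergences = [
        ("neural multicellularity", 570e6),  # years before present
        ("big-brained mammals",     100e6),
        ("hominid oral language",   7.5e6),  # midpoint of "10-5 million"
        ("first technology",        2.0e6),
        ("digital technology",      60.0),   # assumed, mid-20th century
    ]

    prev = None
    for (a, t_a), (b, t_b) in zip(emergences, emergences[1:]):
        interval = t_a - t_b                 # time separating two emergences
        note = f" ({prev / interval:.0f}x shorter)" if prev else ""
        print(f"{a} -> {b}: {interval:.3g} years{note}")
        prev = interval

Each printed interval is several-fold shorter than the one before it, which is the acceleration pattern at issue here.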

Systems thinkers, since at least Henry Adams in 1909, have noted that
each successive emergence is vastly shorter in time than the one that preceded
it. Some type of global universal acceleration seems to be part and parcel to
the singularity generation process. Note also that each of the computational
systems that generates a singularity is incapable of appreciating many of the
complexities of the progeny system. A sun has little computational capacity
to "understand" the organic chemistry it engenders, even as it creates
and interacts intimately with that chemistry. A bacterium does not deeply comprehend
the multicellular organisms which spring from its symbiont colonies, even as
it adapts to life on those organisms, and thus learns at least something reliable
about their nature. Humanity, in turn, can have little understanding of the
subtle mind-states of the A.I.s to come, even as we become endosymbiotically
captured by and learn to function within that system, in the same way bacteria
(our modern mitochondria) were captured by the eukaryotic cell.

Yet at the same
time, the more complex any system becomes, the better it models the universe
that engendered it, and the better it understands its own history, the physical
chain of singularities that created it. That also implies, given the recursive, self-similar nature of the singularity generation process, that it comes to understand its own developmental future better as well. If our entire universe
is evolutionary developmental, which is an elegantly simple possibility, then
it is constrained to head in some particular direction, a trajectory that we
are beginning to see clearly even today.

For a very incomplete
outline of this trajectory, we can propose that the universe must invariably
increase in average general entropy (in practice, if not in theory), with islands
of locally accelerating order, that each hierarchical system must emerge from
and operate within an increasingly localized spacetime domain, and that the
network intelligence of the most complex local systems must always accelerate
over time. The simplicity of such macroscopic, developmental rules and of developmental
convergence in general, by comparison to the unpredictable complexity of the
microscopic, evolutionary features of any complex system, is what allows even
twenty-first century humans to see many elements of the framework of the future,
even if the evolutionary details must always remain obscure.

This surprising
concept, the "unreasonable effectiveness" of simple mathematics, analogies,
and basic rules and laws for explaining the stable features of otherwise very
complex universal systems, has been called Wigner’s Ladder, after Eugene Wigner’s famous 1960 paper on this topic. As I will explore later, a developmentalist like myself begins
his inquiry by suspecting that the universe has self-organized, over
many successive cycles, to create its presently stunning set of hierarchical
complexities, in the same manner as my own complexity has self-organized, over
five billion years of genetic cycling, to create the body and mind that I use
today. Furthermore, if emergent intelligence can be shown to play any role in
guiding this cycling process, then it seems quite likely that if the universe
could, it would tune itself for Wigner’s Ladder to be very easy to climb by
emerging computational systems at every level during the universal unfolding.
This process would ensure that intelligence development, versus all manner of
destructive shenanigans, is a very rewarding, very robust, strongly non-zero-sum
game, at every level of universal development.

Certainly there seems to be evidence for this at any system level we observe. The developing brain
is an amazingly friendly environment for our scaffolding neurons to emerge within.
They seem to discover, with very little effort, the complex set of signal transductions
necessary to get them to useful places within the system, all with a surprisingly
simple agent-based model of the environment in which they operate. In another
example, a non-linguistic proto-mammal of 100 million years ago (or today’s
analog), if placed in a room with you today, would develop a surprisingly useful
sense of who you are and what general behaviors you were capable of after only
short exposure, even though it would never figure out your language or your
internal states. Even a modest housefly, after a reasonable period of exposure
to 21st century humans, is rarely so surprised by their behavior
that it dies when poaching their fruit. So it is that all the universe’s pre-singularity
systems internalize quite a bit of knowledge concerning the post-singularity
systems, even if they never understand their internal states. I contend that
human beings, with the greatest ability yet to look back in time to the processes
that create us, have a very powerful ability to look forward as well with regard
to developmental processes. I think we can use this developmental insight to
foretell a lot about the necessary trajectory of the post-singularity systems
on the other side.

Given the empirical
evidence of MEST compression over the last half of the universe’s developmental
history, where the dominant substrates have transitioned from galaxies to stars
to planetary surfaces to biomass to multicellular organisms to conscious hominids
and soon, to conscious technology that will, for an equivalent complexity, be
vastly faster and more compact than our own bodies (which are filled mostly
with housekeeping systems, not computing architectures), it seems almost painfully
obvious to me that the constrained trajectory of all multi-local universal intelligence
has been, to date, one that is headed relentlessly toward inner space, not outer
space. The extension of this trajectory must lead, it seems, to black hole level
energy densities in the foreseeable future. Indeed, some prominent physicists
have drawn surprisingly similar conclusions using lines of reasoning entirely
independent from my own (see Seth Lloyd’s "Ultimate Physical Limits
to Computation," Nature, 2000, and Eric Chaisson’s Cosmic Evolution, 2001).
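
To give the quantitative flavor of Lloyd’s argument (quoting from memory, so treat these numbers as approximate): the Margolus-Levitin theorem bounds any physical computer of average energy E to at most 2E/πℏ elementary operations per second, so a single kilogram of matter, fully converted to computation (E = mc² ≈ 9 × 10¹⁶ joules), could perform on the order of 10⁵⁰ operations per second, a bound that is only approached at black hole densities. That is the quantitative sense in which the MEST compression trajectory points toward black-hole-level energy densities.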

I call this the
href="http://www.singularitywatch.com/specu.html">developmental singularity hypothesis,
and it is admittedly quite speculative. It is also known as the transcension
scenario, as opposed to the expansion scenario, for the future of local intelligence.
The expansion scenario, the expectation that our human descendants will one
day colonize the stars is, today, an almost universal de facto assumption of
the typical futurist. I consider that model to be 180 degrees incorrect. Outer
space, for human science, will increasingly become an informational desert, by
comparison to the simulation science we can run here, in inner space. I suggest
that the cosmic tapestry that we see in the night sky may be most accurately
characterized as the "rear view mirror" on the developmental trajectory
of physical intelligence in universal history. It provides a record of far larger,
far older, and far simpler computational structures than those we are constructing
here, today, in our increasingly microscopic environments.

Let me relate
some personal background on this insight. As a child, I was extremely fortunate
to grow up with a subscription to National
Geographic
magazine. When I discovered that my high school library (Chadwick
School) had issues back to the beginning of the century, it became one of my
favorite haunts. This led to a series of lucky events, including a very special seventh grade history class (Thank you, Mr. Bullin) where we discussed both
universal and human development, and later, an English class where the summer
reading was Charles Darwin’s Voyage of the Beagle, 1909. I was a very inconsistent, daydreaming student in those
days. When I finally got around to reading the Beagle, the story of the
energetic young Darwin wherein he developed the background knowledge that inexorably
led him to his Great Idea, I could not escape the realization that I’d also
discovered a similar great idea myself during all those lazy afternoons, flipping
magazines and thinking.

The idea was essentially
this: every new system of intelligence that emerges in the universe clearly
occupies a vastly smaller volume of space, and plays out its drama using vastly
smaller amounts of matter, energy, and time. At the same time, anyone aware of the amazing replicative repetitiveness of astronomical features would suspect that there are likely to be billions of intelligences like ours within the universe. Yet
we have had no communication from any of them, even from those Sun-like stars,
closer to our own galactic center, which are billions of years older than ours.
This curious situation is called the Fermi Paradox, after Enrico Fermi,
who in the 1940s asked the famous question, "Where Are They?," in
relation to these older, putatively far more technologically advanced civilizations.
Contemplating this question in 1972, it struck me that the entire system is
apparently structured so that intelligence inexorably transcends the universe,
rather than expanding within it, and that black holes, those curious entities
that exist both within and without our universe, probably have something central
to do with this process. These simple ideas were the seed of the developmental
singularity hypothesis, and I’ve been tinkering with it ever since.

All this brings
us to the interesting question of the future of artificial intelligence.

Given the background
I have related above, I have the strong suspicion that when our A.I. wakes up,
regardless of what it does in its inner world, it will increasingly transition
into what looks to the rest of the universe like a black hole. This "intelligent"
black hole singularity apparently results from an accelerating process of matter,
energy, space, and time compression (MEST compression) of universal computation,
in the same way that gravitation drives the accelerating formation of stellar
and galactic black hole singularities, which seem to be analogous end states,
in this universe, of much simpler cycling complex adaptive systems.

From our perspective
this may be an entirely natural, incremental, and reversible (at least temporarily)
development, and if it occurs, we will very likely all be taken along for the
ride as well, in a voluntary process of transformation. This "inclusive"
feature of the transition seems reasonable if one makes a chain of presently
thinly-researched assumptions, including: 1) that the A.I.s will have significantly
increased consciousness at or shortly after their emergence, 2) that once they have modeled us, and all other life forms, to the point of real-time predictability, they will be ethically compelled to ubiquitously share this gift, 3) that all
life forms will find such a gift to be irresistible, and 4) by the simple act
of sharing they will turn us into them. This convergent planetary transition
to the postbiological domain would comprise a local "technetic takeover"
as complete as the "genetic takeover" that led to the emergence of
DNA-guided protein synthesis as the sole carrier of higher local intelligence
after biogenesis.

I’ll forgive you
if you think at this point that I’ve taken leave of my senses, and I’m not going
to try to defend these perspectives further here, as that would be beyond the
scope of this interview, and more appropriate to my forthcoming book. But if
you are interested in conducting your own research, consider exploring the link
above, and reading some helpful books that each explore important pieces of
the larger idea. You might start with Lee Smolin’s The Life of the Cosmos, 1994, Eric Chaisson’s Cosmic Evolution, 2001, and James Gardner’s Biocosm, 2003. You could also peruse Sheldon Ross’s Simulation, 2001, though that is a technical work. If you have any feedback at that point,
send me an email and let me know what you think.

I remember I first encountered this idea in a science fiction story that
I considered to be entertaining, but closer to fantasy than true science fiction.
It did not appear to be grounded in reality. A short time later I was given
a copy of Vernor Vinge’s essay on the singularity and I began to reconsider
whether there might not be something to it. Does the idea of the singularity
originate with Vinge or elsewhere?

In my research
to date, the first clear formulation of the singularity idea originated with
one of America’s earliest technology historians, Henry Adams, in "A
Rule of Phase Applied to History," 1909, the same fortuitous year as the edition of Darwin’s Beagle mentioned above. Readers are referred to our Brief History of Intellectual Discussion of the Singularity for more on that amazing story, which mentions
a number of careful thinkers who have illuminated different pieces of the accelerating
elephant in the century since.

Since 1983, as
you mention, the mathematician, computer scientist, and science fiction author
Vernor Vinge has given some of the best brief arguments to date for this
idea. His eight-page internet essay, "The Coming Technological Singularity," 1993, is an excellent place to start your investigation
of the singularity phenomenon. I would also recommend my introductory web site,
SingularityWatch.com, and a few others,
such as KurzweilAI.net, which are referenced
at my site.

Here’s a quote from your SingularityWatch web site: "[Research suggests
that] there is something about the construction of the universe itself, something
about the nature and universal function of local computation that permits,
and may even mandate, continuously accelerating computational development in
local environments." This sounds like metaphysics to me. How could a universe
with such properties come to exist? Does this imply some kind of intelligent
design?

That depends very
much on what you consider "intelligence," I think. One initially suspects
some kind of intelligence involved in the continually accelerating emergences
we have observed. In the phase space of all possible universes consistent with
physical law, one wouldn’t find our kind of accelerating, life-friendly universe
in a random toss of the coin, or as various anthropic cosmologists have pointed
out, even in an astronomically large number of random tosses of the coin. Some
deep organizing principles are likely to be at work, principles that may themselves
exhibit a self-organizing intelligence over time. Systems theorists look for
broad views to get some perspective on this question, so bear with me as we
consider an abstract model for the dynamics that may be central to the issue.

Everything really
interesting in the known universe appears to be a replicating system. Solar
systems, complex planets, organic chemistry, cells, multicellular organisms,
brains, languages, ideas, and technological systems are all good examples. Each
undergoes replication, variation, interaction, selection, and convergence, in
what may be called an RVISC developmental cycle. Given this extensive zoology,
it is most conservative, most parsimonious to assume that the physical universe
we inhabit is just another such system.

Big bang theorists
tell us the universe had a very finite beginning. Since 1998, lambda energy
theorists have told us that our 13.7 billion year universe is already one billion
years into an accelerating senescence, or death. Multiverse cosmologists tell
us that ours is just one of many universes, and some, such as Lee Smolin,
Alan Guth, and Andrei Linde, have suggested that black holes are
the seeds of new universe creation. If so, that would make this universe a very
fecund replicator, as relativity theory predicts that at least 100 trillion black holes exist at the present time.

For each of the
above reproducing complex adaptive systems (CASs, in John Holland’s use
of the term), there are at least two important mechanisms of change we need
to consider: evolution and development. Evolution involves the Darwinian mechanisms
of variation, interaction, and selection, the VIS in the middle of the RVISC
cycle. Development involves statistically deterministic mechanisms of replication
and convergence, the "boundaries" of the RVISC reproduction cycle
for any complex system.
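
To make that division of labor concrete, here is a toy sketch in Python, my own illustrative caricature rather than any established model: the Darwinian VIS steps run inside developmental R and C boundaries that stay fixed from cycle to cycle.

    import random

    ATTRACTOR = 1.0                 # developmental "setpoint" (arbitrary)

    def fitness(p):                 # fitter the closer to the attractor
        return -abs(p["trait"] - ATTRACTOR)

    def rvisc_cycle(population, generations=100):
        for _ in range(generations):
            # R: replication -- each individual copies itself (development)
            offspring = [dict(p) for p in population for _ in range(2)]
            # V: variation -- small random mutations (evolution)
            for p in offspring:
                p["trait"] += random.gauss(0, 0.1)
            # I + S: interaction and selection -- keep the fitter half (evolution)
            offspring.sort(key=fitness, reverse=True)
            survivors = offspring[:len(population)]
            # C: convergence -- development pulls traits toward the attractor
            for p in survivors:
                p["trait"] = 0.9 * p["trait"] + 0.1 * ATTRACTOR
            population = survivors
        return population

    pop = [{"trait": random.uniform(-2.0, 2.0)} for _ in range(20)]
    print(sum(p["trait"] for p in rvisc_cycle(pop)) / 20)  # settles near 1.0

However the evolutionary middle wanders, the developmental boundaries reliably steer the population to the same statistical endpoint, which is the sense in which development is "statistically deterministic."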

Consider human
beings. Our intelligence is both evolutionary and developmental. Each of us
follows an evolutionary path, the unique memetic (ideational) and technetic
(tools and technologies) structures that we choose to use and build. (As individuals
we also follow a genetic evolutionary path, but this is so slow and constrained
that it has become future-irrelevant in the face of memetic and technetic evolution.)
At the same time, we must all conform to the same fixed developmental cycle,
a 120-year birth-growth-maturity-reproduction-senescence-death Ferris wheel that none of us can appreciably alter, only destroy. The special developmental
parameters, the DNA genes that guide our own cycle, were tuned up over millions
of years of recursive evolutionary development to produce brains capable of
complex behavioral mimicry memetics, and then linguistic mimicry memetics, astonishing
brains that now cradle our own special self-awareness.

Now contemplate
our own universe, and imagine as Teilhard de Chardin did with his intriguing
"cosmic embryogenesis" metaphor, that it is an evolutionary developmental
entity with a life and death of its own. In fact, heat death theorists have
known the universe has a physical lifespan for almost two centuries, but we,
thinking like immortal youth, still commonly ignore this. Multiverse models
explore how replicating universes might tune up their developmental genes, over
successive cycles, to usefully use the intelligence created within the "soma"
(body, universe), in the same way that human genes have tuned up to use human
intelligence and finite human lifespan in their own replication. See Tom Kirkwood’s work on the Disposable Soma Theory, in Time of Our Lives, 1999, for one very insightful explanation of the dynamic.

Next, consider
this: If encoded intelligence usefully influences the replication that occurs
in the next developmental cycle, and we can make the case that it always would,
by comparison to otherwise random processes, then universes that encode the
emergence of increasingly powerful universe-modeling intelligence will always
outcompete those that don’t, in the multiversal environment.
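
A trivial numerical caricature of that claim, with rates I have simply made up, shows how even a small replicative edge compounds across cycles:

    # Two universe lineages in a toy multiverse, identical except that one's
    # encoded intelligence adds a small edge to its effective replication
    # rate. Both rates are invented; only the compounding is the point.
    blind, guided = 1.0, 1.0
    for cycle in range(100):
        blind  *= 1.00     # purely random parameter re-tuning each cycle
        guided *= 1.05     # assumed 5% edge from intelligence-guided tuning
    print(guided / blind)  # roughly 131x dominance after 100 cycles

After enough cycles, virtually all surviving universes descend from the intelligence-encoding lineage.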

When I relay these
thoughts to patient listeners, a question commonly occurs. Why wouldn’t universes
emerge which seek to keep cosmic intelligence around forever? This question
seems equivalent to asking why it is that our genes "choose" to continue
to throw away our adult forms in almost all higher species in competitive environments.
The answer likely has to do with the fact that any adult structure has a fixed
developmental capacity, based on the potential of its genes, and once the capacity
has been expressed and accelerating intelligence is no longer occurring in the
adult form, it becomes obvious that the adult structure is just not that smart
in relation to the larger universe. At that point, recycling becomes a more
resource efficient computing strategy than revising. Let’s propose that the
A.I.’s to come, even as they rapidly learn what they can within this universe,
remain of sharply fixed complexity, while operating in a much larger, Gödelian-incomplete
multiverse. As long as that multiverse continues to represent a combinatorial
explosion of possibilities, universal computing systems will likely remain stuck
on a developmental cycle, trading off between phases of parameter-tuning reproduction
and intelligence unfolding. Both of these stages of the cycle incorporate evolution
and development. Another way that systems theorists have explored the yin-yang
of this cycle is in terms of Francis Heylighen and Donald Campbell’s
insights on downcausality (including parameter
tuning) and upcausality (including hierarchical emergence), useful extensions
of the popular concepts of holism and reductionism.

If we live in
a universe populated by an "ecology of black holes," as I suspect,
then we will soon discover that most of them, such as galactic and stellar gravitational
black holes, can only reproduce universes of low complexity. In a paradigm of
self-organization, of iterative evolutionary development, these cycling complex
adaptive systems may be the stable base, the lineage out of which our much more
impressively intelligence-encoding universe has emerged, in the same way that
we have been built on top of a stable base of cycling bacteria. How long our
own universe will continue cycling in its current form is anyone’s guess, at
present. But we may note that in living systems, while developmental cycles
can continue for very long periods of time, they are never endless in any particular
lineage. So it may be that recurrence of the "type" of universe we
inhabit also has a limited lifespan, before it becomes another "type."

Fortunately, all
of this should become much more tractable to proof by simulation, as well as
by limited experiment, in coming decades. As you may know, high energy physicists
are already expecting that we may soon gain the ability to probe the fabric
of the multiverse via the creation of so-called "extreme black holes"
of microscopic size in the laboratory (e.g., CERN’s Large Hadron Collider),
possibly even within the next decade. At the same time, black hole analogs for
capturing light, electrons, and other quanta are also in the planning stages.
With regard to microcosmic reality, I find that truth is always more interesting
than fiction, and often less believable, at first blush.

Using various
forms of the above model, James N. Gardner, Bela Balasz, Ed Harrison,
myself, and a handful of others have proposed that our human intelligence
may play a central role in the universal replication cycle. In the paradigm
of evolutionary development, that would make our own emergence—but not our evolutionary
complexities—developmentally tuned, via many previous cycles, into our universal
genes.

This gene-parameter
analogy is quite powerful. You wouldn’t say that any reasonable amount of your
adult complexity is contained in the paltry 20,000-30,000 genes that created
you. In fact the developmental genes that really created you are a small
subset of those, numbering perhaps in the hundreds. These genes don’t
specify most of the complexity contained in the 100 trillion connections in
your brain. They are merely developmental guides. Like the rules of a low-dimensional
cellular automaton, they control the envelope boundaries of the evolutionary
processes
that created you. So it may be with the 20-60 known or suspected
physical parameters and coupling constants underlying the Standard Model of
physics, the parameters that guided the Big Bang. They are perhaps best seen
as developmental guides, determining a large number of emergent features, but
never specifying the evolution that occurs within the unfolding system.
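
The cellular automaton analogy is easy to make concrete. In the sketch below (Python, illustrative only), eight rule bits play the role of developmental genes: they fully determine the envelope of what can unfold, yet they nowhere specify the detailed pattern that actually appears.

    # Elementary cellular automaton: 8 rule bits act like developmental
    # genes, bounding but never specifying the pattern that unfolds.
    RULE = 30                                # one byte of "genetic" parameter

    def step(cells):
        n = len(cells)
        return [(RULE >> ((cells[(i - 1) % n] << 2)
                          | (cells[i] << 1)
                          | cells[(i + 1) % n])) & 1
                for i in range(n)]

    cells = [0] * 31
    cells[15] = 1                            # a single seed cell
    for _ in range(16):
        print("".join(".#"[c] for c in cells))
        cells = step(cells)

Change the single RULE parameter and the whole envelope of possible histories changes; start the same rule from different noisy seeds and the details differ while the character of the pattern persists.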

As anthropic cosmologists
(those who suspect the universe is specifically structured to create life) are
discovering, a number of our universal parameters (e.g., the gravitational constant,
the fine structure constant, the mass of the electron, etc.) appear to be very
finely tuned to create a universe that must develop life. As cosmology delves
further into M-Theory, anthropic issues are intensifying, not subsiding. Some
theorists, such as Leonard Susskind, have estimated that there are an
incredibly large number of string
theory vacua
from which our particular universal parameters were somehow
specified to emerge.

If you wish to
understand just how powerful developmental forces are, think not only of Stephen
Jay Gould’s "Panda’s Thumb," 1992, which provides an orthodox explanation of evolutionary process, but think
also of what I call "The Twin’s Thumbprints," an example that explains
not evolution, but the more fundamental paradigm of evolutionary development.
Look closely at two genetically identical human twins, and tell me what you
see.

Virtually all
the complexity of these twins at the molecular and cellular scale has been randomly,
chaotically, evolutionarily constructed. Their fingerprints, cellular microarchitecture
(including neural connections), and thoughts are entirely different. Yet they
look similar, age similarly, and even have 40-60% correlation in personality,
as several studies of separated twins have shown. That is an amazing level of
nonrandom convergence to be tuned into such simple initial parameters. Both twins
predictably go into puberty thirteen years later, after a virtually endless
period involving astronomical numbers of interactions at the molecular scale.

So it apparently
is with our own universe’s puberty, which occurred about 12.7 billion years
after the Big Bang, about 1 billion years ago. Earth’s intelligence is apparently
one of hundreds of billions of ovulating, self-fertilizing seeds in our universe,
one that is about to transcend into inner space very soon in cosmologic time.

One of the testable
conclusions of the developmental singularity hypothesis is that the parametric
settings for our universe are carefully tuned to support not simply the statistical
emergence of complex chemistry and occasional life, but a generalized relentless
MEST compression of computational systems in a process of accelerating hierarchical
emergence, a process that must develop accelerating local intelligence, interdependence,
and immunity (resiliency) on virtually all of the billions of planets in this
universe that are capable of supporting life for billions of years. This life
in turn is very likely to develop a technological singularity, and in some cosmologically
brief time afterward, to follow a constrained trajectory of universal transcension.

Most likely, this
transition leads to a subsequent restart of the developmental cycle, which would
provide the most parsimonious explanation yet advanced for how the special parameters
of our universe came to be. As with living systems, these parameters were apparently
self-organized, over many successive cycles, not instantiated by some entity
standing outside the cycle, but influenced incrementally by the intelligence
arising within it. In this paradigm, developmental failures are always possible.
But curiously, they are rarer, in a statistical sense, the longer any developmental
process successfully proceeds. Just look at the data for spontaneous abortions
in human beings, which are increasingly rare after the first trimester, to see
one obvious example.

But even if all
this speculation is true, we must realize that this says little about our evolutionary
role. Remember, life greatly cherishes variation. There is probably a very deep
computational reason why there are six billion discrete human beings on the
planet right now, rather than one unitary multimind. Consider that every one
of the developmental intelligences in this universe is, right now, taking its
own unique path down the rabbit hole, and they are all separated by vast distances,
planted very widely in the field, so to speak, to carefully preserve all that
useful evolutionary variation. I find that quite interesting and encouraging.
Free will, or the protected randomness of evolutionary search at the "unbounded
edge" between chaos and control in complex systems, always seems to be
central to the cycle at every scale in universal systems.

Now it is appropriate
to consider another commonly-asked question with regard to these dynamics. How
likely is it, by becoming aware of a cosmic replication cycle and our apparent
role in it, that we might alter the cycle to any appreciable degree?

To answer this,
it may also be helpful to realize that complex adaptive systems are always aware
that many elements of their world are constrained to operate in cycles (day/night,
wake/sleep, life/death, etc.). So it’s only an extension of prior historical
insight if we soon discover that our universe is also constrained to function
in the same manner. It may help to remember that long before human society had
theories of progress (after the 1650s), and of accelerating progress (after the singularity hypothesis, beginning in the 1900s), cyclic cosmologies and theories of social change were the norm. Even mating salmon are probably very aware of their own impending demise in the cycle of life. They certainly expend their energy in ways that are entirely purposeful in that regard.

But awareness of a cycle, in any of these or other examples, does not allow
us to escape it. Or if we think we do, as in transferring our biological bodies to cybernetic systems to avoid biological death, we will likely discover that the same life/death cycle continues to operate at the scale that we hold most dear, which at that time will no longer be our physical bodies, but
the realm of our higher thoughts, perennially struggling in algorithmic cycles
of evolutionary development, death and life, erasure and reconstitution. As
personal development theorist Stephen Covey (Seven Habits of Highly Effective People, 1990) is fond of saying, you cannot break
fundamental principles, or laws of nature. You can only break yourself against
them, if you so choose. So it is that I don’t have any expectation that our
local intelligence could be successful in escaping the cosmic replication cycle.
I think that insight is valuable for predicting several aspects of the shape
of the future.

For example, every
scenario that has ever been written about humans "escaping to the stars"
ignores the accelerating intelligence that would occur onboard the ship. Such
civilizations must lead, in a very short time, to technological singularities
and, in the developmental singularity hypothesis, to universal transcension.
As Vernor Vinge says, it is very hard to "write past the singularity,"
and in this regard he has referred both to technological and developmental types.

Alternative scenarios
of constructing signal beacons, or nonliving, fixed-intelligence robotic probes
to spread an Encyclopedia Galactica,
as Carl Sagan once proposed, ignore the massive reduction in evolutionary
variation that would result. This strategy would effectively turn that corner
of the galaxy into an evolutionarily sterile monoculture, condemning all intelligent
civilizations in the area to go down the hole in the same way we did, and all
developmental singularities in the vicinity to be of the same type. If I am
right, our information theory will soon be able to conclusively prove that all
such one-way communications can only reduce total universal complexity, and
are to be scrupulously avoided.

In conclusion,
I don’t think we can get around cyclic laws of nature, once we discover them.
But they can give us deep insight into how to spend our lives, how to surf the
tidal waves of accelerating change toward a more humanizing, individually unique,
and empowering future.

Much of this sounds
quite fantastical, so let me remind you that these are speculative hypotheses.
They will stand or fall based on much more careful scientific investigation
in coming years. Attracting that investigation is one of the goals of our organization.

If, as Ray Kurzweil has suggested, intelligence is developing on its own
trajectory—first in a biological substrate and now in computers—is there an inevitability to the singularity that makes speculating
about it superfluous? Is there really anything we can do about it one way or
the other?

Certainly you
can’t uninvent math, or electricity, or computers, or the internet, or RFID,
once they arrive on the scene. Anyone who looks closely notices a surprising
developmental stability and irreversibility to the acceleration.

But we must remember
that developmental events are only "statistically deterministic."
They often occur with high probability, but only when the environment is appropriate.
Developmental failure, delay, and less commonly, acceleration can also occur.

Speaking optimistically,
I strongly suspect that there is little we could do to abort the singularity,
at this very late stage in its cosmic development. It appears to me that we live in a "Child Proof Universe," one that has apparently self-organized,
over many successive cycles, to keep many of the worst destructive capacities
out of the hands of impulsive children like us.

This is a controversial
topic, so I will mention it only briefly, but suffice it to say that after extensive
research I have concluded that no biological or nuclear destructive technologies
that we can presently access, either as individuals or as nations, could ever
scale up to "species killer" levels. All of them are sharply limited
in their destructive effect, either by our far more complex, varied, and overpowering
immune systems, in the biological case, or by intrinsic physical limits—combinatorial
explosion of complexity in designing multistage fission-fusion devices—in the
nuclear weapons case. These destructive limits may exist for reasons of deep
universal design. A universe that allowed impulsive hominids like us an intelligence-killing
destructive power wouldn’t propagate very far along the timeline.

Speaking pessimistically,
I’m sure we could do quite a bit to delay the transition, by fostering a series
of poorly immunized catastrophes. If events take an unfortunate and unforesighted
turn, our planet might suffer the death of a few million human beings at the
hands of poorly secured and monitored destructive technologies, perhaps even
tens of millions, in the worst of the credible terrorist scenarios. But I am
of the strong opinion that we will never again see the 170 million deaths, due
to warfare and political repression, that occurred during the 20th
century. See Zbigniew Brzezinski’s Out of Control, 1995, for an insightful accounting of the excesses of that now fortunately bygone
era. We are on the sharply downsloping side of the global fatality curve, and
we can thank information and communications technologies for that, more than
any other single factor in the world.

Today, we live
in the era of instant news, electronic intelligence and violence that is increasingly
surgically minimized, by an increasingly global consensus. Even with our primitive,
clunky, first generation internet and planetary communications grid, I feel
our planet’s technological immune systems have become far too strong and pluralistic,
or network-like, for the scale of political atrocities of the twentieth century
to ever recur. Yet conflict and exploitation will continue to occur, and we
could certainly choose a dirty, self-centered, nonsustainable, environmentally
unsound approach to the singularity. Catastrophes can and will continue to recur.
I hope for all our sakes that they are minimized, and that we learn from them
as rapidly and thoroughly as possible.

Unlike a small
minority of aggressive transhumanists, I applaud the efforts we are making to
create a more ecologically sustainable, carefully regulated world of science
and technology. Wherever we can inject values, sensitivity, accountability into
our sociotechnological systems, I think that is a wonderful thing. I’d love
to see the U.S. take a greener path to technology development, the way several
countries in Europe have. I’m also pragmatic in realizing that most social changes
we make will be more for our own peace of mind, and would have little effect
on the intrinsic speed of our global sci-tech advances, on the rate of the increasingly
human-independent learning going on in the ICT architectures all around us.

I consider such
moves to be more reflections on how we walk the path, choices that will in most
cases do very little to delay the transition. I also do not think it is valuable
to hold the perspective that we should get to the singularity as fast as we
can, if that path would be anything other than a fully democratic course. There
are many fates worse than death, as all those who have freely chosen to die
for a cause have realized over the centuries. There are many examples of acceleration
that come at unacceptable cost, as we have seen in the worst political excesses
of the twentieth century. No one of us has a privileged value set.

So perhaps most
importantly, we need to remember that the evolutionary path is what we control,
not the developmental destination. That’s the essence of our daily moral choice,
our personal and collective freedom. We could chart a very nasty, dirty, violent,
and exploitative path to the singularity. Or with good foresight, accountability,
and self-restraint, we could take a much more humanizing course. I am a cautious
optimist in that regard.

Christine Peterson recently told me that artificial intelligence represents
the one future development about which she has the most apprehension. It can
come the closest of any scenario to Bill Joy’s "the future that doesn’t
need us." If the coming of the singularity means the ascendancy of machine
intelligence and the end of the human era, shouldn’t we all be doing what we
can to prevent it from happening?

Ah yes, the Evil Killer Robots scenario. Some of my very clever transhumanist
colleagues worry quite a bit about "Friendly AI." I’m glad to have
friends that are carefully exploring this issue, but from my perspective their
worries seem both premature and cautiously overstated. I strongly suspect that
A.I.s, by virtue of having far greater learning ability than us, will be, must
be, far more ethical than us. That is because I consider ethics to be an emergent
computational interdependence, a mathematics of morality, a calculus of civilization
that is invariably discovered by all complex adaptive systems that function
as collectives. And anything worthy of being called intelligent always functions
as a collective, including your own brain. Today’s cognitive scientists are
discovering the evolutionary ethics that have become self-encoded in all known
complex living systems, from octopi to orangutans, from guppies to gangsters.
For more on this intriguing perspective, see such works as Robert Axelrod’s The Evolution of Cooperation, 1985, Matt Ridley’s The Origins of Virtue, 1998, and Robert Wright’s Non-Zero, 2001.

This optimism
isn’t enough, of course. We humans had to go through a nasty, violent, and selfish
phase before we became today’s semi-civilized simians. How do we know computers
won’t have to do the same thing? I think the answer to this question is that
at one level, Peterson’s intuitions are probably right. Tomorrow’s partially-aware robotic systems and
A.I.s will have to go through a somewhat unfriendly, dangerous phase of "insect
intelligence." As Jeff
Goldblum
reminded us in David Cronenberg’s, The
Fly
, insects are brutal, they don’t compromise, they don’t have compassion.
Their politics, as E.O. Wilson’s
href="http://www.amazon.com/exec/obidos/tg/detail/-/0674002350/">Sociobiology,
1975/2000 reminds us, are quite comfortable with brute force. That’s a potentially
dangerous developmental stage for an A.I. You wouldn’t want that kind of A.I.
running your ICU, or your defense grid. Or your nanoassembler machines.

But you would
very likely let such a system run the robotics in a manufacturing plant, especially
if evolutionary systems have proven, as they are already demonstrating today,
to be far more powerfully self-improving, self-correcting, and economical than
our top down, human-designed software systems. That plant, of course, would
be outfitted and embedded within a much larger matrix of technological fire
extinguishers, an immune system capable of easily putting out any small fires
that might develop.

But with a learning curve that is multi-millionfold faster than ours, I expect
that "insect transition" to last weeks or months, not years, for any
self-improving electronic evolutionary developmental system. You can be sure
these systems will be well watched over by a bevy of A.I. developers, and the few catastrophes that do occur will be carefully addressed by our cultural and technological immune systems. It’s easy to underestimate the extent and effectiveness of immune systems; they aren’t obvious or all that sexy, but they underlie every intelligent system you can name. Computer scientist Diana Gordon-Spears and others
have already organized conferences on "Safe Learning Agents," for
example, and we have only just begun to build world-modeling robotics. We’re
still several decades away from anything self-organizing at the hardware level,
anything that could be "intentionally" dangerous.

We also need
to remember that humans will be practicing artificial selection on tomorrow’s
electronic progeny. That is a very powerful tool, not so much for creating complexity,
but for pruning it, for ensuring symbiosis. We’ve had 10,000 years of artificial
selection on our dogs and cats. Their brain structures are black boxes to us,
and yet we find very few today that will try to grab human babies when the parents
are not looking. Again, those few that do are taken care of by immune systems
(we don’t continue to breed such animals, statistically speaking).

In short, I expect
human society will coexist with many decades of very partially aware A.I.s, beginning sometime between 2020 and 2060, which will give us ample time to select for stable,
friendly, and very intimately integrated intelligent partners, for each
of us. Hans Moravec (Robot, 1999) has done some of the best writing in this area, but even he sometimes
underestimates the importance of the personalization that will be involved.
As a species, humanity would not let the singularity occur as rapidly as it
will without personally witnessing the accelerating usefulness of A.I. interacting
with us in all aspects of our lives, modeling us through our LUI systems, lifecams,
and other aspects of the emerging electronic ecology.

By contrast,
every scenario of "fast takeoff" or A.I. emergence that I’ve ever
seen, the heroic individual toiling away in the lab at night to create HAL-9000,
just doesn’t seem to understand the immense cycles of replication, variation,
interaction, selection, and convergence in evolutionary development that are
always required to create intelligence in both a bottom-up and top-down fashion.
Since the 1950s, almost all the really complex technologies we’ve created have
required teams, and there is presently nothing in technology that is as remotely
complex as a mammalian brain.

As I mention
on my website, I think we are going to have to see massively parallel hardware
systems, directed by some type of DNA-equivalent parametric hardware description
language, unfolding very large, hardware-encoded neural nets and testing them
against digital and real environments in very rapid evolutionary developmental
cycles, before we can tune up a semi-intelligent A.I. The transition will likely
require many teams of individuals and institutions, integrating bottom-up and
top-down approaches, and be primarily a hardware story, and only secondarily
a software story, for a number of reasons.

Bill Joy, in Wired, December 2003,
notes that we can expect a 100X increase (6-7 doublings) in general hardware
performance over the next ten years, and a 10X increase in general software
(e.g., algorithmic) performance. While certain specialized areas, like computer
graphics chips may run faster (or slower), on average this sounds about right.
Note the order of magnitude difference in the two domains. Hardware has always
outstripped software because, as I’ve said earlier, it seems to be following
a developmental curve that is more human discovered than human created. It is
easier to discover latent efficiencies in hardware vs. software "phase
space", because the search space is much more directed by the physics of
the microcosm. Teuvo Kohonen, one of the pioneers of neural networks,
tells me that he doesn’t expect the neural network field to come into maturity
until most of our nets are implemented in hardware, not software, a condition
we are still at least a decade or two away from attaining.
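
Joy’s round numbers are easy to sanity-check with a little arithmetic on the figures as quoted: 100X in a decade is between six and seven doublings, one roughly every eighteen months, while 10X in software is one doubling about every three years.

    import math

    # Sanity check on the quoted decade-scale rates: 100X hardware,
    # 10X software. Pure arithmetic on the figures cited above.
    for domain, factor in [("hardware", 100.0), ("software", 10.0)]:
        doublings = math.log2(factor)     # doublings per ten years
        months = 120 / doublings          # months per doubling
        print(f"{domain}: {doublings:.1f} doublings, "
              f"one every {months:.0f} months")

The hardware figure lands neatly on the canonical eighteen-month doubling time, consistent with my point that hardware follows a discovered developmental curve.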

The central problem
is an economic one. No computer manufacturer can begin to explore how to create
biologically-inspired, massively parallel hardware architectures until our chips
stop their magic annual shrinking game and have become maximally-miniaturized
(within the dominant manufacturing paradigm) commodities. That isn’t expected
for at least another 15 years, so we’ve got a lot of time yet to think about
how we want to build these things.

If I’m right,
the first versions of really interesting A.I.s will likely emerge on redundant,
fault tolerant evolvable hardware "Big Iron" machines that take us
back to the 1950s in their form factor. Expect some of these computers to be
the size of buildings, tended by vast teams of digital gardeners. Dumbed-down
versions of the successful hardware nets will be grafted into our commercial
appliances and tools, mini-nets built on a partially reconfigurable architecture,
systems that will regularly upgrade themselves over the Net. But even in the
multi-millionfold faster electronic environment, a bottom-up process of evolutionary
development must still require decades, not days, to grow high-end A.I. And
primarily top-down A.I. designs are just flat wrong, ignorant of how complexity
has always emerged in physical systems. Even all of human science, which some
consider the quintessential example of a rationally-guided architecture, has
been far more an inductive, serendipitous affair than a top-down, deductive
one, as James Burke (Connections, 1995) delights in reminding us.

So, when one
of the first generation laundry folding robots in 2030 folds your cat by accident,
we’ll learn a tremendous amount about how rapidly self-correcting these systems
are, how quickly, with minor top-down controls and internet updates, we can
help them to improve their increasingly bottom-up created brains. Unlike today’s
still-stupid cars, for example, which currently participate in 40,000 American fatalities every year, tomorrow’s LUI-equipped, collision-avoiding, autopiloting vehicles will be increasingly human-friendly and human-protecting every year.
This encoded intelligence, this ability to ensure increasingly desirable outcomes,
is what makes a Segway so fundamentally
different from a bicycle. Segway V, if it arrives, would put out a robotic hand
or an airbag to protect you from an unexpected fall. So it will be with your
PDA of 2050, but in a far more generalized sense.

In a related point,
I also wouldn’t worry too much about the loss of our humanity to the machines.
Evolution has shown that good ideas always get rediscovered. The eye, for example,
was discovered at least thirty times by some otherwise very divergent genetic
pathways. As Simon Conway Morris eloquently argues (Life’s Solution, 2003), every single aspect of our human-ness that we prize has already been independently
emulated to some degree, by the various "nonhuman" species we find
on this planet. Octopi are so smart, for example, that they build houses, and
learn complex behavior (e.g., jar-opening) from each other even when kept in
adjacent aquaria.

This leads us
to a somewhat startling realization. Even if, in the most abominably unlikely
of scenarios, all of humanity were snuffed out by a rogue A.I., from a developmentalist
perspective it seems overwhelmingly likely that good A.I.s would soon emerge
to recreate us. Probably not in the "Christian rapture" scenario envisioned
by transhumanist Frank Tipler in The Physics of Immortality, 1997, but certainly our informational essence, all that
we commonly hold dear about ourselves.

How can we even
suspect this? Humanity today is doing everything it can to unearth all that
came before us. It is in the nature of all intelligence to want to deeply know
its lineage, not just from our perspective, but from the perspective of the
prior systems. If the world is based on physical causes, then in order to truly
know that one understands the world, one must truly know, and be able to understand
at the deepest level, the systems in which one is embedded, the systems from
which one has emerged, in a continuum of developmental change. The past is always
far more computationally tractable than what lies ahead.

That curiosity
is a beautiful thing, as it holds us all tightly interdependent, one common
weave of the spacetime fabric, so to speak.

That’s why we
are already spending tens of millions of dollars a year trying to model the
way bacteria work, trying to predict, eventually in real-time, everything they
do before they even do it, so that we know we truly understand them. That’s
why emergent A.I. will do the same thing to us, permeating our bodies and brains
with its nanosensor grids, to be sure it fully understands its heritage. Only
then will we be ready to make the final transition from the flesh.

Also on your website, I read that the singularity will occur within the
next 40 to 120 years. Isn’t that kind of a broad range? What’s your best guess
on when it will occur?

I find that those
making singularity predictions can be usefully divided into three camps: those predicting
near term (now to 2029), mid-term (2030-2080), and longer term (2081-2150+)
emergence of a generalized greater-than-human intelligence. Each group has somewhat
different demographics, which may be interesting from an anthropological perspective.

I think the range
is so broad because the future is inherently unpredictable and under our influence.
It is also true that none of us has yet developed a popular set of quantitative
methodologies for thinking rigorously about these things. Very little money
or attention has been given to them. If you’d like to send a donation to our
organization to help in that regard, let us know.