There's Always the Space Ark

By | April 14, 2005

InstaPundit links this morning to the Guardian’s Top Ten List of doomsday predictions. My take is that these problems are all more or less fixable, with a little ingenuity. Let’s have a look.

1: Climate Change

By the end of this century it is likely that greenhouse gases will have doubled and the average global temperature will have risen by at least 2C. This is hotter than anything the Earth has experienced in the last one and a half million years.

Well, no. According to the Klima Climate Change Center:

Changes in our climate have been occurring naturally. Ice core data have shown that the surface temperatures for the past 420,000 years have been following a steady rhythm of warming and cooling, suggesting that climate is affected by natural forcings and feedbacks. The global temperature record of the last 420,000 years shows that the amplitude of warm and cold (interglacial and ice age) events does not go beyond 8 °C.

So a 2 °C shift is well within the planet's experience over the past 420,000 years, never mind the past million and a half. Will a 2 °C rise bring about devastation? Will it even happen?

Consider this possibility: maybe global warming is the only thing standing between us and the next ice age.

The solution to the Global Warming Doomsday scenario is twofold:

– Eliminate the politics and hysteria from the discussion

– Then define appropriate action


Stephen here – I was preparing a post on the same subject, but Phil beat me to the punch. My thoughts will be italicized.

There is a general consensus among scientists that the world is warming and that human activity is part of the problem. It is my belief that (whether fortunately or unfortunately) world petroleum production will soon peak. Petroleum prices will rise until alternatives become attractive. As those alternatives are adopted, they will be developed – made more efficient, and cheaper.

Many of these alternatives are also cleaner for the environment. I am thinking mainly of hydrogen, but helium-3 and even cold fusion remain possibilities.


2: Telomere Erosion

My theory is that there is a tiny loss of telomere length from one generation to the next, mirroring the process of ageing in individuals. Over thousands of generations the telomere gets eroded down to its critical level. Once at the critical level we would expect to see outbreaks of age-related diseases occurring earlier in life and finally a population crash.

The telomere-shortening problem has been identified as one of the Seven Deadly Causes of Aging. Top minds are already at work on how to solve this problem for individuals. If Stindl’s (highly speculative) theory of species-level telomere-shortening pans out, I’m sure that whatever we come up with to address the problem for individuals will have some applicability at the species level.

If so, we’ll have yet another example of life-extension research providing “practical” (yeah, like there’s nothing “practical” about extending human life) benefits.


I think this theory is bogus. I’ve never heard of an upper limit for the life expectancy of a species in the fossil record. If this were a problem, why do we have turtles?

But if I’m wrong and our species were due for telomere extinction in the next century, I’m convinced we would be able to engineer our way out of the crisis. Provided some other catastrophe doesn’t set us back to the Stone Age, there will be a technological solution to this problem long before it becomes an issue.


3: Viral Pandemic

Within the last century we have had four major flu epidemics, along with HIV and Sars. Major pandemics sweep the world every century, and it is inevitable that at least one will occur in the future. At the moment the most serious concern is H5 avian influenza in chickens in south-east Asia.

There are some specific, practical steps we can take concerning avian strains of the flu from Asia.

Longer term, nanotechnology looks like our best pandemic defense.


We’re overdue for a bad flu (whether from nature or some imprudent research accident) – and flu is a virus. I’m also concerned that we are currently losing the antibiotics arms race against bacterial infections.

Back in 2003 Glenn Reynolds wrote briefly about “peptide nanotubes that kill bacteria by punching holes in the bacteria’s membrane… By controlling the type of peptides used to build the rings, scientists are able to design nanotubes that selectively perforate bacterial membranes without harming the cells of the host.”

This is an exciting development because bacteria would have trouble adapting to this kind of attack: as long as the nanotubes can distinguish bacterial membranes from host membranes, they remain lethal to the bacteria without harming us.


4: Terrorism

Today’s society is more vulnerable to terrorism because it is easier for a malevolent group to get hold of the necessary materials, technology and expertise to make weapons of mass destruction. The most likely cause of large scale, mass-casualty terrorism right now is from a chemical or biological weapon.

Well, call me crazy, but I think what’s called for here is an outright global war on terrorism. We could even call it that — The War on Terror. The Guardian doesn’t mention whether any such solution has been considered.

Also, as with item 1, I think it’s important for us all to truly understand the problem.


The Guardian exposed their bias by failing to mention the War on Terror. I recommend this article to those who would like to “understand the problem.”


5: Nuclear War

In theory, a nuclear war could destroy the human civilisation but in practice I think the time of that danger has probably passed.

Solution: End communism in Europe. Done. Problem solved. Hey, who am I to argue with Air Marshal Lord Garden?


It wouldn’t take an all-out nuclear war to set our civilization back twenty years. Those who think the United States has overreacted to 9/11 would think back fondly on these carefree days were a nuke to explode within our borders.

The risk that a state actor would carry out such an attack is low. This is one big reason why it’s so important for the U.S. to win the War on Terror. Certainly Al Qaeda wouldn’t hesitate to use a nuke against us if they could get one. Our risk is reduced if they’re busy running and dying.


6: Meteorite Impact

Over very long timescales, the risk of you dying as a result of a near-Earth object impact is roughly equivalent to the risk of dying in an aeroplane accident. To cause a serious setback to our civilisation, the impactor would have to be around 1.5km wide or larger.

A nasty potential problem, to be sure. While some folks are working on how best to assess and categorize the risk, others are developing actual solutions to the problem. Note that the proposed “tugboat” method is superior to the nuclear options we looked at a few years ago. Blasting an asteroid into little bits just means that we’ll be bombarded by thousands of little meteorites rather than one big one.
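A toy calculation shows why the slow "tugboat" nudge works: even a tiny velocity change, applied years in advance, adds up to a miss distance measured in thousands of kilometers. This is a hedged back-of-the-envelope sketch only; the 1 cm/s figure and the linear-drift model are illustrative assumptions, and real deflections are amplified further by orbital mechanics.

```python
# Back-of-the-envelope: how far does a tiny "tugboat" nudge shift an
# asteroid over a long lead time? Toy linear-drift model (ignores
# orbital mechanics); the numbers are illustrative only.

EARTH_RADIUS_KM = 6_371

def miss_distance_km(delta_v_cm_s: float, lead_time_years: float) -> float:
    """Displacement from a one-time velocity change, drifting linearly."""
    seconds = lead_time_years * 365.25 * 24 * 3600
    return (delta_v_cm_s / 100) * seconds / 1000  # cm/s -> m/s -> km

for years in (1, 10, 20):
    d = miss_distance_km(1.0, years)
    print(f"{years:2d} years of drift after a 1 cm/s nudge: "
          f"{d:,.0f} km ({d / EARTH_RADIUS_KM:.2f} Earth radii)")
```

With twenty years of warning, a 1 cm/s nudge moves the rock by roughly one Earth radius – which is why early detection matters more than raw deflection power.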



We are now entering a time when it would be possible to deflect or destroy threatening objects. Our ability to protect ourselves should improve over time barring some other catastrophe.


7: Robot Takeover

Robot controllers double in complexity (processing power) every year or two. They are now barely at the lower range of vertebrate complexity, but should catch up with us within a half-century. By 2050 I predict that there will be robots with humanlike mental power, with the ability to abstract and generalise.

Such robots need not become a threat, but they could. Glenn Reynolds and Scott Burgess are having some fun with the idea of “Robot Overlords,” but this is actually a fairly serious risk. Fortunately, as with telomere shortening and asteroid impacts, there are excellent minds looking at the problem. The solution seems to be to make robots as friendly as we can while we’re still the ones making them. (Before they start making themselves.)


If strong A.I. is possible, then I would expect it to be achieved this century. Once achieved it would not be possible to prohibit it. The value is such that prohibition simply wouldn’t work.

What might work is a requirement that strong A.I. always be tied to a human. Symbiotic A.I. would help ensure that humanity is transformed rather than supplanted.


8: Cosmic Ray Blast

Once every few decades a massive star from our galaxy, the Milky Way, runs out of fuel and explodes, in what is known as a supernova. Cosmic rays (high-energy particles like gamma rays) spew out in all directions and if the Earth happens to be in the way, they can trigger an ice age.

Well, first off, I just have to ask — if it’s going to trigger an ice age, isn’t this our best hope against global warming?

Okay, seriously, we know we don’t want to be exposed to all that radiation. But what can protect us from a star exploding light years away? Here are three thoughts:

– Figure out a way to destroy, without unleashing the cosmic rays, all stars in the immediate vicinity. (Raises major ethical and logistical issues.)

– Figure out a way to move our solar system safely out of the galaxy. (Still logistically difficult, but ethically okay.)

– Build a protective Dyson sphere around the solar system. (Cheap, safe, and easy!)

Also note that a smaller-scale Dyson sphere might be used to protect us from asteroid collisions.


Supernovas are out of human control and so aren’t worth worrying about. Even an interstellar civilization would be destroyed unless it were spread across multiple distant star systems.


9: Super-Volcano

Approximately every 50,000 years the Earth experiences a super-volcano. More than 1,000 sq km of land can be obliterated by pyroclastic ash flows, the surrounding continent is coated in ash and sulphur gases are injected into the atmosphere, making a thin veil of sulphuric acid all around the globe and reflecting back sunlight for years to come. Daytime becomes no brighter than a moonlit night.

Based on my detailed analysis of the recent made-for-TV movie on the subject, I think our best bet is to create some kind of release valve for these systems. The pressure in a caldera system builds and builds until the gases and magma begin to vent, leading to massive explosions. But what if we created a way for the system to vent slowly over a period of years? Obviously, this would require some engineering beyond what we currently have. (But it would be significantly easier to do than the Dyson sphere.)

Also, slowly releasing volcanic ash into the atmosphere might be another way to offset global warming.


A super-volcano is out of our control, but humanity might survive if it had a permanent off-world presence. In fact, a permanent off-world presence would protect humanity from extinction in all but the supernova, artificial black hole, and telomere erosion scenarios.


10: Earth Swallowed by Black Hole

Around seven years ago, when the Relativistic Heavy Ion Collider was being built at the Brookhaven National Laboratory in New York, there was a worry that a state of dense matter could be formed that had never been created before. At the time this was the largest particle accelerator to have been built, making gold ions crash head on with immense force. The risk was that this might form a state that was sufficiently dense to be like a black hole, gathering matter from the outside.

If we’re worried about being swallowed by home-made black holes, my suggestion is that we don’t make any. If we’re worried about rogue black holes wandering in from deep space, the solution will probably lie with some kind of combination of the solutions for problems 6 and 8.

Failing that, there’s always the Space Ark.


The black hole scenario is uniquely scary. Most of these other risks leave the possibility of some survivors. Not this. Even an off-world presence wouldn’t necessarily save us – Mars wouldn’t be far enough to avoid disaster.

Apparently most physicists don’t think this is a real danger. I hope they’re right.



  • http://beyondwords.typepad.com/beyond_words/ Kathy

    This is a fabulous post, Phil. How do you do it?

    I have missed roaming the blogosphere lately.

  • Karl Hallowell

Well, I think most of the other dangers pale compared to nuclear war. IMHO, we’ll remain for a long time at the stage where a war could take out most of the human race – even after we become strongly established in space. The only valid solution in the long term is diversification at the galactic level.

    And what’s not properly addressed here is that the response to 9/11 was at least as damaging as the attack itself, and that human civilization is overcentralized. It is vulnerable to terrorist attack and that is what causes the overreaction.

    In the US, if a terrorist group acquires nukes and sets one off, then there really isn’t any good way to protect the infrastructure. A place like Manhattan, Silicon Valley, Houston, etc is too concentrated. That creates the climate for overreaction.

  • Engineer-Poet

    A civilization capable of building a Dyson sphere wouldn’t care much about the fate of a single planet; it probably would have used all the planets in its system for building materials.

    As I’ve speculated elsewhere, a GRB caused by the merger of two neutron stars would probably be predictable from gravitational radiation.  (You’d have some warning of a supernova from the neutrino burst associated with the core collapse, but it would be hours and not months or years.)  Now, I freely confess that I’m not as up on radiation physics as I could be, but consider that causing serious planetary damage with gamma rays requires a very large amount of energy to be deposited in the atmosphere.  Suppose that you stick a mass of material, perhaps only centimeters thick on average, some distance (tens of thousands to millions of miles) away from the planet in the direction of the source.  This material stops some of the gamma rays, but mostly it scatters them.  With the photons deflected, most of the radiation goes somewhere other than the planet.

    For any statistical range of scattering angles, there is a distance where only an arbitrarily small fraction of the radiation still hits the planet.  Set that small enough to manage the atmospheric effects and get the positioning (timing of objects in various orbits!) of the shields just right, and the problem is solved.  Also, just covering the sky with scatterers won’t do; you also have to keep off-axis radiation from being scattered onto the planet.
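Engineer-Poet’s "arbitrarily small fraction" claim can be checked with a crude disc-geometry estimate. This is a sketch only: the 1-degree scattering half-angle and the shield distances are made-up illustrative numbers, not radiation physics, and the model treats the scattered photons as spreading uniformly over a disc.

```python
import math

EARTH_RADIUS_KM = 6_371.0

def fraction_hitting_earth(shield_distance_km: float,
                           scatter_half_angle_deg: float) -> float:
    """Crude estimate: by the time they reach Earth's distance, photons
    scattered through half-angle theta have spread over a disc of radius
    d*tan(theta); the planet intercepts the ratio of the two disc areas
    (capped at 1 when the spread is smaller than the planet)."""
    spread_km = shield_distance_km * math.tan(math.radians(scatter_half_angle_deg))
    return min(1.0, (EARTH_RADIUS_KM / spread_km) ** 2)

for d in (1e5, 1e6, 1e7):
    print(f"shield at {d:,.0f} km, 1-degree scatter: "
          f"{fraction_hitting_earth(d, 1.0):.4f} of the flux still hits Earth")
```

Even with a tiny 1-degree deflection, pushing the shield from 10^5 km to 10^7 km drops the intercepted fraction from everything to about a tenth of a percent, which is the point of his distance argument.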

    Learning enough about the physics of neutron-star mergers and supernovae to have that timing dead-nuts on is going to be a big part of the trick.

    Preventing nuclear attacks is more of an exercise in psychology than physics.  Sure, it matters NOW, but in the grand scheme of the universe it’s small potatoes. ;-)

  • Dave Schuler

    And the Earth being swallowed up by a black hole will be even less likely if they don’t exist.

  • Jim Strickland

    re: robot takeover

Extraordinarily unlikely. Yes, it probably is possible to build AIs. Yes, it probably is possible to build human-level ones, but why would anyone bother, save for proof of concept? For most applications a human-level (or better) AI is far more intelligence than is needed. I do not, for example, want a car with a human intelligence that will disagree about where we should go. I want a car with artificial dog intelligence – avoids running into things, goes home when the driver is too drunk to drive, and comes when called. A corollary to this is that any useful human-level AI, in order to BE useful, will be subservient. And any dunce who builds robot slaves who aren’t happy and fulfilled as robot slaves (and therefore not inclined to change their status) deserves what he or she gets.

Seriously though, the idea of robots taking over is ridiculous simply for the cost. Even with self-replicating factories, it’s still easier, cheaper (and much more fun) to make more people. We, too, are a self-replicating system; our raw materials are naturally occurring, and our designs and, to some extent, software have had a million years’ debugging.

    -Jim