Speaking of Cosmology

February 7, 2007

I wonder what the relationship is between Gardner’s Intelligent Universe and Nick Bostrom’s Simulation Argument.

A quick recap of the latter: Bostrom argues that one of the following three propositions is most likely true:

(1) the human species is very likely to go extinct before reaching a “posthuman” stage;

(2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof);

(3) we are almost certainly living in a computer simulation.
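For readers who want the gist of how the trilemma hangs together: Bostrom's paper derives it from a simple expression for the fraction of all human-type observers who live in simulations. Writing \(f_P\) for the fraction of human-level civilizations that survive to a posthuman stage and \(\bar{N}\) for the average number of ancestor-simulations such a civilization runs (this is a sketch of the paper's core formula, not a substitute for reading it):

```latex
f_{\mathrm{sim}} \;=\; \frac{f_P \, \bar{N}}{f_P \, \bar{N} + 1}
```

Unless the product \(f_P \bar{N}\) is very small — which is what propositions (1) and (2) assert — \(f_{\mathrm{sim}}\) is close to one, which is proposition (3).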

The full argument is lots more fun than any abbreviated version could possibly be, so don’t cheat yourself. At first blush, it looks to me as though a simulated universe would fit pretty well with Gardner’s idea of a carefully designed intelligent universe. If our universe is a simulation, it explains why so much of what we observe appears to be “rigged” in favor of life in general and us in particular.

But then, why were things that way in the original universe, which we’re now simulating? Was it a simulation, too? Or maybe things weren’t that way at all in our ancestor universe. Maybe this universe is some kind of bizarre jazz riff on the original. In which case, why does it seem so…mundane? You would think that a wholly (or largely) original universe ought to have some magical stuff in it, or else what’s the point?

On the other hand, of course it would all seem mundane to us. Maybe light or gravity are wonderfully exotic concepts, notions that would amaze and delight the inhabitants of the original universe. And perhaps their everyday reality would delight and amaze us. We’ll probably never know.

But then again…

UPDATE:

I followed my own link to begin re-reading Bostrom’s essay when something caught my attention that I had skimmed over before:

Simulating even a single posthuman civilization might be prohibitively expensive. If so, then we should expect our simulation to be terminated when we are about to become posthuman.

Criminy, now there’s a thought. Is anybody at the Lifeboat Foundation working on that possibility?

UPDATE FROM STEPHEN:

So, Bostrom would have us choose one of the three…

Well, I definitely don’t buy #2. It seems very likely that any post-Singularity civ would run many, many simulations of its past (or even very weird variations on reality). Alternate history is already an important genre of fiction in this reality.

So I guess I’m left on the fence between #1 and #3. I don’t really like either – particularly if Bostrom is right about our simulation being terminated at the point of Singularity. It’s death by funnel (#1) or reboot (#3). It’s the Fermi Paradox meeting The Doomsday Argument in The Matrix.

Anyway, let’s throw out #2 because it is implausible, and let’s discount #1 because… well, I’d rather not believe it. Call it hope. So I’m left with #3 and Bostrom’s rather depressing thought that a post-Singularity civilization might terminate our simulation at the moment of Singularity.

But wait. I think Bostrom is thinking pre-Singularity. Why would a post-Singularity civilization that’s running a simulation terminate it at the point of Singularity? Lack of computing power? Naahhh.

Why not bring the simulated minds into their world at that point? In fact, wouldn’t that be an efficient way to keep the exponential progress of a Singularity going? A post-Singularity civilization could literally bring more and more post-Singularity minds into the “real” world via full simulations of other realities up to the point of their Singularity.

This would answer the Fermi Paradox and The Doomsday Argument and the puzzling Anthropic Principle. Of course this universe was built to favor life. The civilization that’s simulating our universe wants more post-Singularity minds.

Why not just make minds in their reality rather than go to all the trouble of simulating a universe? Perhaps because there is some advantage to the diversity of minds that might develop in different realities.

Just a thought.

  • Karl Hallowell

I think a triage system ought to be put in place for this sort of thing. I.e., do something only if you can answer “yes” to the following questions: 1) Can you do anything about the perceived problem? and 2) Is it worth the effort to do anything about the perceived problem?

    I think the answer to 1) would be “no”, unless there’s some way to whine to the admins about getting more resources.

  • Phil Bowermaster

    Are you kidding? Those IT guys hardly ever call back, and when they do it’s usually just to give attitude. Maybe it’s time for this computer simulation to figure out a way to optimize itself. Of course, we would probably have to make the leap to resource-intensive posthuman intelligence in order to figure that out.

    It’s always something.

  • D. Vision

Well, as fascinating as it might be to consider such questions (and I have done significant work on computer simulation in my own time), this all boils down to philosophical arguments, not scientific arguments. A good simulation of a system – any system, but particularly a computer system – makes it impossible for the entities inside it (be they programs or sentient beings) to _detect_ that they actually _are_ in a simulation, as opposed to the real thing.

This leaves me with a particularly bad taste of unfalsifiability about the whole idea: there is nothing that we can do, see, or measure that would indicate we are in a simulation, since by the very nature of a simulation we cannot tell it apart from the “real thing.”

It’s utterly immaterial to the real questions that humanity faces about philosophy and science. Our universe being “rigged” could be the result of the anthropic principle, which is unfalsifiable, so why argue against it? Why don’t we simply continue studying the universe we are in and figure out how it works?

  • Phil Bowermaster

    D.,

Not sure I follow you. If by “simulation” we mean “exact reproduction,” then it would be hard — perhaps impossible — for simulated entities to know that they’re in a simulation. But if I were going to create a simulated universe, I might toy around with things a little. I’d plant clues in it that it was a simulated universe — maybe provide an interface for communicating between my level of reality and the new one (once they had figured out the trick).

    Or picture this — a guy living in a simulated universe misses all the clues that were built into his universe that it’s simulated, but then goes on to model a simulation of his own. It’s an exact reproduction — no clues. But it’s so exact that the beings in the new universe pick up the clues and figure out that they’re living in a simulation. They get the message once removed!

Falsifiability? Relax, we’re just having fun here. No one’s trying to fund any research or change any school curricula. On the other hand, I suppose one could argue that Bostrom’s argument is built on highly suspect premises: it generalizes about what “civilizations” do when we really have a sample of one, and no good reason to believe that there are, have been, or will be any others.