Monthly Archives: March 2010

Fast Forward Radio — Utopias and Dystopias

Phil and Stephen discuss utopias and dystopias — images of the world gone completely right, or horribly wrong. We find these ideas in fiction and in political discourse, among other places. Are they helpful? What can they really tell us about the future?


Longer Living through Plastics

Wednesday evening I had a chat with a fellow futurist who told me about some exciting work that’s being done in cryonics. A new approach to the problem of preserving the human brain is being developed that does not rely on cold storage — which up to now has been the standard approach and is, as far as I know, the only form of suspension that anyone is currently using. The new approach relies on encasing the brain in plastic. And by that I don’t mean putting a plastic shell around the brain, but rather infusing all the brain tissue with a resin which will harden and perfectly preserve the brain’s cellular and neuronal structure.

My friend explained that this approach will be a major game-changer because it won’t require anything like the infrastructure and investment involved in cryonic freeze. Plastination, he explained, will cost less than a casket burial. The economic argument is gone. The “yuck” factor may hang in as an objection, but this plastic approach has the advantage of being the second major model proposed. It will never be as shocking as the original “corpsicle” (and subsequent “headsicle”) ideas.

My friend also told me that he and a colleague are working on promoting this approach via multiple channels, including raising funding for financial incentives for researchers achieving defined goals towards full brain preservation. (I don’t recall that he actually used the phrase “push-prize,” but it sounded an awful lot like that.) He pointed out that the practice might be widely adopted even by those who for religious or other reasons aren’t interested in being revived. For example, memories retrieved from a preserved brain in “offline mode” (meaning that no attempt is made to restore conscious brain function) might be of great value to family members, future historians, etc.

I’m not identifying this futurist because he told me that he’s not yet ready to go public with this effort, although I’m looking forward to having him on Fast Forward Radio as soon as he is ready. Anyhow, I found it pretty interesting on Thursday, having had this conversation the previous night, to see this same idea being kicked around on Fight Aging!, Accelerating Future, and InstaPundit.

The discussion ultimately centers around this site, which argues for both the plastination method and a push prize. The site is owned and run by Kenneth Hayworth. I can’t say whether there is any connection between Hayworth and my friend. The best case would be no connection — meaning that there are several different groups and individuals working on this goal simultaneously.

Michael quotes a key piece from Hayworth’s site, which I will repeat here:

From a medical and technical standpoint all that is needed is the development of a surgical procedure for perfusing a patient’s circulatory system with a series of fixatives and plastic resins capable of perfectly preserving their brain’s neural circuitry in a plasticized block for long-term storage. Such a procedure would, in effect, put the patient into a long dreamless sleep where they can wait out the decades or centuries necessary for the development of the more advanced technology required to revive them.

How could a patient ever be awoken from such an unconventional sleep? The necessary technology exists in primitive form today — the plasticized brain block will be automatically sliced into thin sections and these scanned in an electron microscope at nanometer resolution. Such scanning can map out the exact synaptic connectivity among neurons while simultaneously providing information on a host of molecular-level constituents. This map of brain connectivity will then be uploaded into a computer emulation controlling a robotic body — the patient awakes to a new dawn of unlimited potential.

I think this approach, once perfected, could well be the technology that pushes cryonics more or less into the mainstream. Hayworth foresees a future in which uploading human personality from a carefully preserved brain is viewed roughly the way laser eye surgery is today. I think that’s about right, although the stakes are clearly higher with uploading.

Hayworth makes a passionate case that we need to overcome backward philosophical ideas in order to enable such technology in the near future. Michael reiterates that case. Both take a dim view of religion, seeing it as a primary culprit in blocking progress in this kind of research. I’ll deal with that issue separately somewhere down the road, but for now I’ll just state that I don’t think there is any real conflict between religious belief and brain preservation, any more than there’s a conflict between religious belief and this technology.

However, there is a philosophical discussion in the comments on Michael’s post which I think is quite interesting.

The debate comes down to this question: if you store my brain in plastic for a couple of centuries, then slice it up to create an uploaded virtual replica, then fire up the virtual replica…have you brought me back to life? My answer to that, assuming that everything works, is a qualified “yes.” (It’s a major qualification, though.) The replica will have my personality, my memories, and — from his standpoint — a continuous experience of being Phil Bowermaster, with this one interruption, which may be no more significant to him than a single night’s sleep. From his standpoint, and from the standpoint of the outside world, I have been brought back to life.

I can even go so far as to say that from MY standpoint, as the replica, I have been brought back to life.

In fact, there is only one standpoint from which anything looks amiss. And that, of course, would be my other standpoint, the standpoint of the original Phil Bowermaster. That Phil Bowermaster, it would seem to me, gets left behind in those discarded slices of plastinated brain. So even though the replica is me as far as he is concerned and as far as the world is concerned, in an important sense — from the point of view of the original — I am not there.

Hayworth argues quite eloquently that this sense of something being amiss is based on an illusion. I find the argument compelling but less than completely convincing or satisfying. Michael points out that consciousness is not continuous, anyway, that it is interrupted daily by sleep and can be more severely messed with by things like head trauma and coma. However, I’m not concerned with continuity of consciousness. My concern is continuity of substrate.

I prefer a digitization scheme in which the old substrate functions concurrently with, and is slowly replaced by, the new one. That is to say, I need to consciously experience moving from my brain to the computer in order to accept that I have in fact made the move. Michael and Hayworth would argue that this is illusory thinking and bad philosophy. I would counter that this is merely being careful.

I say rather than slicing up my dead brain and reading it straight into digital form, I’d like to hang in until nanotechnology actually enables deplastinating and reviving my brain in a nice new cloned or robotic body. From there, I’d be happy living in non-uploaded form for a brief time until a conscious, gradual upload can be arranged. In IT terms, we’re talking about warm standby rather than cold standby. It might be more difficult and more expensive, but having waited decades or centuries, I’m okay with taking a few extra steps to make sure that my survival is actually my survival.
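For readers who don’t live in IT, the standby analogy can be sketched in a few lines of code. This is purely illustrative — the class names and data are hypothetical, not drawn from any real failover library:

```python
# Illustrative sketch of cold standby vs. warm standby.
# All names and data here are hypothetical, for illustration only.

class ColdStandby:
    """The backup exists only as a static snapshot; the replacement
    is started from that snapshot after the primary is already gone."""
    def __init__(self, snapshot):
        self.snapshot = dict(snapshot)  # e.g., the plastinated brain

    def failover(self):
        # The replacement boots from the snapshot; the original
        # never runs alongside it.
        return dict(self.snapshot)


class WarmStandby:
    """The backup runs alongside the primary and is kept in sync,
    so the handover is a continuous, overlapping transition."""
    def __init__(self):
        self.replica = {}

    def replicate(self, key, value):
        # Primary and replica operate concurrently; every change
        # is mirrored while the primary is still live.
        self.replica[key] = value

    def failover(self):
        return dict(self.replica)


# Cold: whatever happened after the snapshot was taken is lost.
cold = ColdStandby({"memory": "2010"})
print(cold.failover())  # {'memory': '2010'}

# Warm: the replica was updated all along, so the handover
# picks up exactly where the primary left off.
warm = WarmStandby()
warm.replicate("memory", "2010")
warm.replicate("memory", "2011")
print(warm.failover())  # {'memory': '2011'}
```

The point of the analogy: slice-and-scan is cold standby (restore from a static copy, with the original gone), while a gradual, conscious upload is warm standby (old and new substrates running and synchronized side by side until the switch).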

Hayworth presents a mind-uploading bill of rights which reads in part:

Revival rights — The revival wishes of the individual undergoing brain preservation should be respected. This includes the right to refuse revival under a list of circumstances provided by the individual before preservation.

Bingo. My circumstances would include, among other things, the requirement that my suspended brain first be revived and that I be uploaded via a warm standby approach. Call me old-fashioned, but when I get brought back to life, I want to be there to see it.

The Age of Medical Nanobots is Approaching

The vision: medical nanobots racing through your bloodstream to the site of bacterial or viral infection… or cancer… or even injury. Once at the site of the dysfunction, the nanobot dumps its medicine cargo. The effectiveness of the drug is increased because it’s delivered exactly where it’s needed. Side effects are cut because the medicine doesn’t go where it’s not needed.

This great sci-fi concept is becoming reality:

RNAi, also known as “gene silencing,” is a cellular mechanism that blocks the production of proteins, and has tantalized doctors as a potential medicine for a number of years now. However, by placing payloads of RNA in a polymer nanobot, scientists have finally shown that this technique can work against tumors in human patients.

Specially constructed molecules could potentially block the expression of genes critical to the reproduction of viruses and the spread of cancer. But until now, doctors had been unable to direct those molecules to the right cellular nuclei. Scientists from the California Institute of Technology solved this problem by placing the RNA molecules in a specialized polymer robot with a chemical sensor. When the environment of a cancerous cell triggers the chemical sensor, the robot releases the RNA.

The trial involved three people with melanomas who received the RNA-loaded nanoparticles intravenously four times, for 30 minutes, over three weeks. At the end of that time, samples taken from the melanomas showed both the presence of the RNA, and a reduction in tumor gene expression.

Why Do They Have to Develop?

Here’s a charmer of a quote from the comments section of the article I linked the other day about how a new catalyst enables highly efficient production of hydrogen from water:

Ok why do under developed nations even need power honestly? Can’t they just stay under-developed forever?

I’d like to think this is a joke. Unfortunately, even if it is a joke, it reflects a belief that is held in all seriousness by far too many people: to wit, that there is a case to be made for depriving less developed societies of economic and technological development. The argument begins with the assumption that such development is inherently harmful to the planet. We can’t even afford for the developed world to continue to be developed, the thinking goes. We certainly don’t need any more societies joining our matricidal ranks, toxifying the planet, contributing to mass extinctions, and paving the way for some final, cataclysmic end.

A supporting set of assumptions derives from a highly romanticized view of primitive cultures. Some 18th-century romantic primitivists touted the idea of the Noble Savage, which held that people living in a “state of nature” are not only happier than, but morally superior to, their civilized brethren. And this idea is with us even today. While the phrase “Noble Savage” doesn’t get too much play these days, there is apparently no shortage of individuals who do not doubt for a second the veracity of the scenes depicted on their souvenir Avatar beverage cups from Burger King.

This Noble Savage argument is a sop to the first argument. Since we know that economic and technological development represent nothing but bad news for the planet, and since we know that primitive peoples are healthier, happier, more attractive, and nicer than we are, it would be wrong even to think about subjecting primitive people to our way of life — even if they think they want it. After all, we know better than they do — they’re a bunch of primitives! (Conveniently, the certitude that they are wiser than we are extends to virtually every subject except this one.)

All right, so let’s deal with these arguments.

1. Economic and technological development cause massive damage to the planet and their proliferation will only cause more damage.

Well, yes and no. There is no question that, historically, human success has come at the expense of many other members of the ecosystem. We’ve done a lot of damage. But that isn’t the whole story. Dirty technologies have enabled the development of cleaner technologies. Unsustainable practices have set the stage for sustainable ones. In a very real sense, it is human success which has empowered the environmental movement. For the first time in the history of the planet, members of one species are taking steps to prevent the extinction of other species, looking for ways to mitigate and repair damage to the environment, and even talking about one day bringing other species back from extinction.

These astounding trends are the result of economic and technological development. Non-developed cultures may “live in harmony” with nature, but they don’t attempt any of this proactive stuff.

2. Primitive cultures are better off staying primitive.

We’ll leave the assumed moral superiority of primitive cultures alone. I don’t believe that it is a given (far from it), but let’s take it as a given that primitive cultures are as nice as (or maybe even a little nicer than) developed ones. The part of the argument I want to deal with is the part that says that the material well-being of people who live in such cultures is as good as or better than what we enjoy.

Anyone who truly believes this to be the case ought to put on a loincloth and move into a grass hut on a riverbank somewhere. Live the rest of your life — or even a few months — without the benefits of modern food production, sanitation, health care, shelter, clothing, communications, and entertainment…and then come back and tell the rest of us how much better it is. If you really believe it is better, good for you. Back to the hut with you, and thanks for doing your part to help fix the planet.

But if you don’t think it’s better, and in fact you find such a life to be harsh beyond description and not something you want to endure yourself, then please refrain from glibly subjecting other people to it.

Fair?

Posthuman Rules

On last week’s podcast, we touched briefly on the subject of what kind of government / security infrastructure will need to be implemented in a world in which anyone can make, well, anything. I suggested that some kind of powerful enforcement mechanism will have to be put in place, although much of the “policing” might be built into the system itself, and that ultimately we will look to artificial intelligence to perform this particular government function (along with all other government functions). In the comments section of the show post, DCWhatthe proposes that a “wisdom of crowds” approach might be sufficient.

Michael Anissimov posts an interesting essay responding to criticism of transhumanist thought in which he takes the argument about the need for security to the next level:

The “how” question is where things can get sticky. Most of human existence is not so crime-free and kosher as life in the United States or Western Europe. Business as usual in many places in the world, including the country of my grandparents, Russia, is deeply defined by organized crime, physical intimidation, and other primate antics. The many wealthy, comfortable transhumanists living in San Francisco, Los Angeles, Austin, Florida, Boston, New York, London, and similar places tend to forget this. The truth is that most of the world is dominated by the radically evil. Increasing our technological capabilities will only magnify that evil many times over.

The answer to this problem lies not in letting every being do whatever they want, which would lead to chaos. There must be regulations and restrictions on enhancement, to coax it along socially beneficial guidelines. This is not the same as advocating socialist politics in the human world. You can be a radical libertarian when it comes to human societies, but advocate “stringent” top-level regulation for a transhumanist world. The reason why is that the space of possibilities opened up by unlimited self-modification of brains and bodies is absolutely huge. Most of these configurations lack value, by any possible definition, even definitions adopted specifically as contrarian positions to try and refute my hypothesis. This space is much larger than we can imagine, and larger than many naive transhumanists choose to imagine. This is especially relevant when it comes to matters of mind, not just the body. Evolution crafted our minds over millions of years to be sane. More than 999,999 out of every 1,000,000 possible modifications to the human mind would be more likely to lead to insanity than improved intelligence or happiness. Transhumanists who don’t understand this need to study the human mind and looming technological possibilities more closely. The human mind is precisely configured, the space of choice is not, and ignorant spontaneous choices will lead to insane outcomes.

I think the problem with taking a “wisdom of crowds” — or any organic, ground-up approach — to addressing these risks is that the downside is so great. We only have to be wrong once and it’s game over. On the other hand, we can’t let the risks inhibit all forward movement. If the risk-averse don’t take steps to get us to a secure replicator-driven economy, or posthuman future, the risk-non-averse will very likely get us to a dangerous (to say the least) version of each of those.