Posthuman Rules

March 7, 2010

On last week’s podcast, we touched briefly on the subject of what kind of government / security infrastructure will need to be implemented in a world in which anyone can make, well, anything. I suggested that some kind of powerful enforcement mechanism will have to be put in place, although much of the “policing” might be built into the system itself, and that ultimately we will look to artificial intelligence to perform this particular government function (along with all other government functions). In the comments section of the show post, DCWhatthe proposes that a “wisdom of crowds” approach might be sufficient.

Michael Anissimov posts an interesting essay responding to criticism of transhumanist thought in which he takes the argument about the need for security to the next level:

The “how” question is where things can get sticky. Most of human existence is not so crime-free and kosher as life in the United States or Western Europe. Business as usual in many places in the world, including the country of my grandparents, Russia, is deeply defined by organized crime, physical intimidation, and other primate antics. The many wealthy, comfortable transhumanists living in San Francisco, Los Angeles, Austin, Florida, Boston, New York, London, and similar places tend to forget this. The truth is that most of the world is dominated by the radically evil. Increasing our technological capabilities will only magnify that evil many times over.

The answer to this problem lies not in letting every being do whatever they want, which would lead to chaos. There must be regulations and restrictions on enhancement, to coax it along socially beneficial guidelines. This is not the same as advocating socialist politics in the human world. You can be a radical libertarian when it comes to human societies, but advocate “stringent” top-level regulation for a transhumanist world. The reason why is that the space of possibilities opened up by unlimited self-modification of brains and bodies is absolutely huge. Most of these configurations lack value, by any possible definition, even definitions adopted specifically as contrarian positions to try and refute my hypothesis. This space is much larger than we can imagine, and larger than many naive transhumanists choose to imagine. This is especially relevant when it comes to matters of mind, not just the body. Evolution crafted our minds over millions of years to be sane. More than 999,999 out of every 1,000,000 possible modifications to the human mind would be more likely to lead to insanity than improved intelligence or happiness. Transhumanists who don’t understand this need to study the human mind and looming technological possibilities more closely. The human mind is precisely configured, the space of choice is not, and ignorant spontaneous choices will lead to insane outcomes.
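
The combinatorial intuition behind that last claim is easy to sanity-check. As a purely illustrative back-of-envelope sketch (the parameter counts and the per-parameter tolerance below are invented numbers, not anything from Anissimov’s essay): if a mind depends on n independently tuned parameters, and a random modification leaves any single parameter in its workable range with probability p, then the chance the whole configuration stays sane is p^n, which collapses toward zero very quickly.

    # Illustrative only: invented numbers, not a model of any actual mind.
    def survival_probability(n_parameters: int, p_single: float) -> float:
        """Chance that a random modification leaves every one of
        n independently tuned parameters in its workable range."""
        return p_single ** n_parameters

    for n in (10, 100, 1000, 10000):
        print(n, survival_probability(n, p_single=0.99))
    # 10     -> ~0.904
    # 100    -> ~0.366
    # 1000   -> ~4.3e-05
    # 10000  -> ~2.2e-44

At a thousand such parameters, fewer than one random modification in twenty thousand survives; a one-in-a-million figure needs nothing more exotic than a somewhat larger parameter count.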

I think the problem with a “wisdom of crowds” approach, or any organic, ground-up approach, to addressing these risks is that the downside is so great. We only have to be wrong once and it’s game over. On the other hand, we can’t let the risks inhibit all forward movement. If the risk-averse don’t take steps to get us to a secure replicator-driven economy or a secure posthuman future, the risk-non-averse will very likely get us to a dangerous (to say the least) version of each.

  • Sally Morem

    Excellent post. It has the makings of a very important show.

    We’re in a bind. We KNOW accelerating technology will continue; we simply don’t know its precise nature. But we have no choice. We can’t relinquish. The logic of the tech will DEMAND that some group or groups stumble upon its secrets.

    We KNOW (as Anissimov pointed out) that there are huge numbers of tech configurations that could lead to disastrous ends, but we also know there are immense numbers that would lead to a world of wonder.

    How do we pick our way through the minefields we know will be ahead of us?

  • Sally Morem

    I haven’t had time to really think through Anissimov’s superb essay. But I did skim the comments written in response to it, and wrote a few comments of my own, in the form of questions:

    As we approach the Singularity, will Robert Wright’s notion of a non-zero-sum society make much more sense to more and more people? Will we stop thinking in terms of “redistribution of wealth” when so many can “grow their own”? Will our notions of what “society” is change radically? Will the Singularity make current libertarian utopian dreams seem very pedestrian by comparison? Will protective systems emerge and stabilize us as we gain more experience with technologies that are accelerating…faster faster faster? Are we Singularitarians and Transhumanists wild-eyed romantics, or aren’t we wild-eyed enough?

    Just some questions for everyone to gnaw on.

  • Sally Morem

    I’m watching a video of Anissimov’s talk given at Foresight 2010. He made an interesting analogy to the great changes we are facing in AI. He listed a number of game-changing technologies developed in the 20th century and asked listeners to consider what it might have been like if all of these had been introduced in ONE year.

    Which triggered this thought: The Singularity. What would it be like if every technology developed from the agricultural revolution through the industrial revolution were presented to us in ONE year? Isn’t that the kind of analogy that gives you a better sense of how mind-blowing the Singularity may wind up being?

  • http://wheretheresawilliam.blogspot.com Will Brown

    How do we pick our way through the minefields we know will be ahead of us?

    Wrong paradigm, Sally, except as a mechanism to guide us as individuals making unique decisions relevant to our personal circumstance of the moment.

    Better, I think, to encourage broad populations toward societally acceptable/desirable choices on the decisions that numerous individuals seem likely to confront while using or developing a given technology over the course of their lifetimes. By making good decision-making personally profitable (or at least valuable in some substantial, and therefore tradeable, fashion), you create a generalized and self-reinforcing environment that works to discourage bad decisions without actively doing so.

    Further to this, I think we all have to accept that any advancement of a given technology (or of human capability more generally) entails a certain amount of unpreventable risk. It’s not as if we haven’t confronted this identical class of challenge before over the course of human history. Setting aside the admittedly unknowable type or degree of enhanced capability we will also develop to respond to, compensate for, or correct undesirable events arising from the same technology-development process, I do believe that seeking to impose proactive mechanisms to pre-empt this potential danger creates a risk of the “cure” becoming worse than the “disease”.

    Here in the US, the “war on drugs” has given rise to enforcement practices and conditions of treatment that even Nixon seems unlikely to have approved of at the policy’s inception. How much human damage do you imagine might result from an equally misguided effort to legislate our way to a “safe” technology revolution? A good strategy is one that others easily recognise as applying beneficially to themselves, too. Thus are allies inspired to create themselves and “good generals” made. :)

    What say we all work toward “good generals” status and not “better Nixons”, eh?

  • Anonymous

    Donald Norman’s “The Design of Everyday Things” probably coined the terms ‘natural constraints’ and ‘affordances’ to describe what can and can’t be done with an object or a system. He described a baby-changing table that could only be lowered after the restroom door was closed; the table then prevented the door from being opened until it was returned to its not-in-use position. It’s a very smart design because nobody needs to enforce the rules; compliance with the desired behavior is inevitable. This kind of design is important to the development of future products and software. If the mechanisms that allow production of “anything” via nanofabbing are inherently tied to an exchange rate, rampantly uncontrolled production becomes less likely. E.g.: “You currently have X kg of materials registered on your account. Would you like to exchange existing materials, or license an increase to the degree you are able to afford?” If I had on-demand stuff and only paid for how much stuff was kept around, I think we’d all have much less clutter in our space. A sketch of what such a ledger might look like follows.
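
    As a minimal, purely hypothetical sketch of that idea (the class name, methods, and numbers are invented for illustration; no such API exists), the constraint lives in the ledger itself, just as Norman’s constraint lives in the changing-table hinge:

        # Hypothetical "constraint by design" fabber account.
        # All names and numbers are invented for illustration.

        class MaterialLedger:
            def __init__(self, licensed_kg: float):
                self.licensed_kg = licensed_kg  # mass this account may hold
                self.in_use_kg = 0.0            # mass currently fabbed into objects

            def fabricate(self, object_kg: float) -> None:
                """Fabrication can only proceed within the licensed budget,
                so the rule is structural rather than enforced after the fact."""
                if self.in_use_kg + object_kg > self.licensed_kg:
                    raise PermissionError(
                        "Over budget: recycle existing objects or license more mass.")
                self.in_use_kg += object_kg

            def recycle(self, object_kg: float) -> None:
                """Returning an object's mass frees budget for new fabrication."""
                self.in_use_kg = max(0.0, self.in_use_kg - object_kg)

        ledger = MaterialLedger(licensed_kg=100.0)
        ledger.fabricate(60.0)  # fine: 60 <= 100
        ledger.recycle(30.0)    # frees 30 kg
        ledger.fabricate(65.0)  # fine again: 30 + 65 <= 100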

  • DCWhatthe

    We aren’t smart enough to know how to prevent catastrophes without inadvertently depriving ourselves of the miracles, or causing other problems. Will Brown is correct on that point.

    As far as crowd wisdom goes, that’s only a possibility.

    Our future, uber-intelligent selves should have a better handle on what works than we do. If we were able to create backups of ourselves before we proceed, and bury them somewhere, then of course we could determine empirically some of the things that work.

    Again, the best we can do is share our sense of optimism and keep moving forward, experimenting and learning more. It is possible that the human species won’t make it, no matter what we do.

  • Orion

    Control.

    You’d have to have a supercomputer that was literally built into the atomic structure of the system itself, capable of projecting out the effects of a course of action, comparing them to a ruleset, and preventing the inhabitants from ever taking that action. The ruleset would define how close the inhabitants could take themselves to the brink, ranging from not allowing them to bang the rocks together in the first place and create civilization, to giving everyone in range a reasonable warning to start running before Pat mixes the contents of Beaker A and Beaker B.
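
    As a toy sketch of that project-compare-intervene loop (every name, score, and threshold below is invented to show the shape of the idea, not a workable design; the projection step is exactly the impossible part):

        # Toy illustration of a project/compare/intervene ruleset.
        # All names, scores, and thresholds are invented for this sketch.

        RULESET = {
            "block_above": 0.9,  # projected risk that forbids the action outright
            "warn_above": 0.5,   # projected risk that triggers a warning instead
        }

        def projected_risk(action: str) -> float:
            """Stand-in for the impossible part: a physics-complete projection
            of the action's downstream effects, reduced to a single score."""
            return {"bang rocks together": 0.1,
                    "mix Beaker A and Beaker B": 0.95}.get(action, 0.0)

        def adjudicate(action: str) -> str:
            risk = projected_risk(action)
            if risk >= RULESET["block_above"]:
                return f"BLOCKED: {action} (risk {risk:.2f})"
            if risk >= RULESET["warn_above"]:
                return f"WARN everyone in range, then allow: {action} (risk {risk:.2f})"
            return f"ALLOWED: {action} (risk {risk:.2f})"

        print(adjudicate("bang rocks together"))        # ALLOWED
        print(adjudicate("mix Beaker A and Beaker B"))  # BLOCKED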

  • http://wheretheresawilliam.blogspot.com Will Brown

    Sally also said:

    Which triggered this thought: The Singularity. What would it be like if every technology developed from the agricultural revolution through the industrial revolution were presented to us in ONE year? Isn’t that the kind of analogy that gives you a better sense of how mind-blowing the Singularity may wind up being?

    It is, and it is also my understanding of the projected rate of progress. What you (and apparently Michael A) didn’t also include is that our ability to obtain, understand, and make use of this accelerating flood of knowledge and capability will concomitantly accelerate as well. This changes the context of the analogy in ways small and consequential, while shifting the emphasis away from concern and toward process: how do we go about the process of making all of this a reality?

    None of which reduces Sally’s point regarding the substance of what’s involved when one says “The Singularity”. My belief is that there exists a sufficiency of human inertia and societal resistance to abandoning existing arrangements that any inhibitive mechanism on development efforts is best reserved for only the most extreme (and therefore unlikely) of circumstances, quite deliberately synonymous with the existing WMD-prohibitions model. Any restriction on individual development of local applications of new technology would most likely result in the (however unintended) criminalisation of the entire genre of technology (nano, fab labs, human enhancement, etc.). There exists, I submit, a sufficiency of history on the results of artificial prohibitions on human behavior.

    Personally, I suspect that our greatest problem in getting to the (or even “a”) Singularity will be convincing each other that this agreed-upon “good thing” actually is doable and preferable to the status quo.