On last week’s podcast, we touched briefly on the question of what kind of government/security infrastructure will need to be implemented in a world in which anyone can make, well, anything. I suggested that some kind of powerful enforcement mechanism will have to be put in place, although much of the “policing” might be built into the system itself, and that ultimately we will look to artificial intelligence to perform this particular government function (along with all other government functions). In the comments section of the show post, DCWhatthe proposes that a “wisdom of crowds” approach might be sufficient.
Michael Anissimov posts an interesting essay responding to criticism of transhumanist thought, in which he takes the argument about the need for security to the next level:
The “how” question is where things can get sticky. Most of human existence is not so crime-free and kosher as life in the United States or Western Europe. Business as usual in many places in the world, including the country of my grandparents, Russia, is deeply defined by organized crime, physical intimidation, and other primate antics. The many wealthy, comfortable transhumanists living in San Francisco, Los Angeles, Austin, Florida, Boston, New York, London, and similar places tend to forget this. The truth is that most of the world is dominated by the radically evil. Increasing our technological capabilities will only magnify that evil many times over.
The answer to this problem lies not in letting every being do whatever they want, which would lead to chaos. There must be regulations and restrictions on enhancement, to coax it along socially beneficial guidelines. This is not the same as advocating socialist politics in the human world. You can be a radical libertarian when it comes to human societies, but advocate “stringent” top-level regulation for a transhumanist world. The reason why is that the space of possibilities opened up by unlimited self-modification of brains and bodies is absolutely huge. Most of these configurations lack value, by any possible definition, even definitions adopted specifically as contrarian positions to try and refute my hypothesis. This space is much larger than we can imagine, and larger than many naive transhumanists choose to imagine. This is especially relevant when it comes to matters of mind, not just the body. Evolution crafted our minds over millions of years to be sane. More than 999,999 out of every 1,000,000 possible modifications to the human mind would be more likely to lead to insanity than improved intelligence or happiness. Transhumanists who don’t understand this need to study the human mind and looming technological possibilities more closely. The human mind is precisely configured, the space of choice is not, and ignorant spontaneous choices will lead to insane outcomes.
I think the problem with a “wisdom of crowds” approach (or any organic, ground-up approach) to addressing these risks is that the downside is so great. We only have to be wrong once and it’s game over. On the other hand, we can’t let the risks inhibit all forward movement. If the risk-averse don’t take steps to get us to a secure replicator-driven economy, or a secure posthuman future, the risk-tolerant will very likely get us to a dangerous (to say the least) version of each.