Future Values

July 18, 2009

My previous post on enhancement by way of subtraction as well as addition — and the possible downsides that could result when people have the option to delete characteristics that we have traditionally thought of as being core to human nature — has led to some interesting commentary from reader Jeff Allbright. Jeff writes:

My message is that there is nothing more important or central to “transhumanist” thought than achieving an effective framework for the promotion of an increasing context of hierarchical, fine-grained, evolving values, promoted via methods increasingly effective, in principle, over increasing scope of interaction….

Likewise, while it’s increasingly meaningless to talk of modifications as inherently “good” or “bad”, it is increasingly meaningful, important, and urgent that we learn to effectively assess and evaluate actions, relative to our evolving values, rationally expected to promote those evolving values over increasing scope.

This puts me in mind of the discussion we had a couple years back about how Asimov’s Three Laws of Robotics might be updated to make them useful as goals or design principles for those looking to establish an ethical framework for artificial intelligence. Restated as goals, the three laws become:

1. Ensure the survival of life and intelligence.

2. Maximize the happiness, freedom, and well-being of individual sentient beings.

3. Ensure the safety of individual sentient beings.

Astute readers will note that I flipped the order of the second and third goals from my original formulation, which reflected Asimov’s order. Last time out, several readers protested that valuing safety over happiness and freedom ultimately creates a nanny state (or worse) in which freedom gets snuffed out altogether. I’m still not convinced that would necessarily be the case, but I do see the risk.
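To make the ordering question concrete, here is a toy sketch in Python (all of the actions and scores below are hypothetical, invented purely for illustration) of goals treated as a strict lexicographic priority: higher-ranked goals are compared first, and lower-ranked ones only break ties. Swapping the second and third goals is enough to flip which action gets chosen.

```python
# Toy illustration: goals as a strict priority ordering.
# All action names and scores are hypothetical, chosen only
# to show how reordering the goals changes the outcome.

def choose(actions, goal_order):
    """Pick the action with the lexicographically best scores,
    comparing goals in priority order."""
    return max(actions, key=lambda a: tuple(a[g] for g in goal_order))

# Each action is scored (0-10) against the three goals.
actions = [
    {"name": "permit risky liberty", "survival": 9, "wellbeing": 8, "safety": 4},
    {"name": "lock everything down", "survival": 9, "wellbeing": 2, "safety": 10},
]

# Revised ordering: well-being and freedom outrank individual safety.
revised = choose(actions, ["survival", "wellbeing", "safety"])

# Asimov-style ordering: safety outranks well-being and freedom.
original = choose(actions, ["survival", "safety", "wellbeing"])

print(revised["name"])   # -> permit risky liberty
print(original["name"])  # -> lock everything down
```

The point of the lexicographic comparison is that a higher-ranked goal can never be traded away for any amount of a lower-ranked one, which is exactly why the readers’ nanny-state worry turns on whether safety sits above or below freedom.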

Anyway, as noted last time, these goals work as well for human beings as for robots, and something like this list might serve as the core of the “context of values” that Jeff calls for above. Interestingly, I don’t think there is anything particularly “new” on the list. As with the Declaration of Singularity, I think we can get a pretty good idea about our future values by looking at the values we have in the present, many of which we have carried with us for a long time.

  • Eadwacer (http://eadwacer42.livejournal.com)

    The problem I have with both addition and subtraction is what might be called ‘hubris’. I am all for enhancements and de-unenhancements, but there’s this problem, best summed up by the old phrase – ‘you can’t do just one thing’. Do we know what the side-effects (AKA the effects) of changing gene X are? Will we spend the time to find out before assuming that we know the important results and pressing on? The classic case, from nature, is where improving resistance to malaria causes increased susceptibility to sickle-cell disease. I’d worry that we don’t know yet what all the side effects are, and that we will jump into the add/subtract game without a clear understanding of them — and that there is no ethical way of finding out short of experimenting on the next generation.

  • Anonymous

    The next generation will experiment on themselves. Consider steroid use in athletes. Everyone knows the negative consequences are unacceptable in the long term, but there are still people who feel they can ‘win’ in the short term.
    In the case of voluntary sociopathy, perhaps the procedure(s) involved are not permanent? Suppose I can wake up M-F and put “insensitive sociopath” in my coffee to be more productive at work, then on the weekends have a nice big breakfast of “compassionate altruism” – am I a bad person? Is this so much different than those who live in metro apartments during the week and return to their suburban families on weekends? How/why?