My previous post on enhancement by way of subtraction as well as addition — and possible downsides that could result when people have the option to delete characteristics that we have traditionally thought of as being core to human nature — has led to some interesting commentary from reader Jeff Allbright. Jeff writes:
My message is that there is nothing more important or central to “transhumanist” thought than achieving an effective framework for the promotion of an increasing context of hierarchical, fine-grained, evolving values, promoted via methods increasingly effective, in principle, over increasing scope of interaction….
Likewise, while it’s increasingly meaningless to talk of modifications as inherently “good” or “bad”, it is increasingly meaningful, important, and urgent that we learn to effectively assess and evaluate actions, relative to our evolving values, rationally expected to promote those evolving values over increasing scope.
This puts me in mind of the discussion we had a couple years back about how Asimov’s Three Laws of Robotics might be updated to make them useful as goals or design principles for those looking to establish an ethical framework for artificial intelligence. Restated as goals, the three laws become:
1. Ensure the survival of life and intelligence.
2. Maximize the happiness, freedom, and well-being of individual sentient beings.
3. Ensure the safety of individual sentient beings.
Astute readers will note that I flipped the order of the second and third goals from my original formulation, which reflected Asimov’s order. Last time out, several readers protested that valuing safety over happiness and freedom ultimately creates a nanny state (or worse) in which freedom gets snuffed out altogether. I’m still not convinced that would necessarily be the case, but I do see the risk.
Anyway, as noted last time out, these goals work as well for human beings as robots, and something like this list might serve as the core for the “context of values” that Jeff calls for, above. Interestingly, I don’t think there is anything particularly “new” on the list. As with the Declaration of Singularity, I think we can get a pretty good idea about our future values by looking at the values we have in the present, many of which we have carried with us for a long time.