2. The Question of Hubris
Last time, I asked whether those hoping for the soft-takeoff version of the Singularity should focus on trying to instill a notion of goodness, in particular the idea of an ultimate good, into the conceptual framework of the emerging intelligences. Irrespective of whether it would be a good idea to try, I don’t think we can make machines that “believe in God” in a meaningful sense. But what about a notion of the good, a good that is transcendent, a good that should always be striven for? Could we make such a notion axiomatic for an emerging intelligence?
From a strictly practical standpoint, an AI hard-coded with a combination of the golden rule and Kant’s categorical imperative would be about as unlikely to go hard-takeoff on us as any being that can be imagined, assuming, of course, that it considers us to be among the “others” unto which it must reciprocally “do,” and that it doesn’t immediately begin formulating Universal Ethical Precepts that involve removing all the “organic infestation” from the planet. Failing a hard-code option, we can attempt to communicate these ideas to the new intelligence. If the ethical cure doesn’t take, getting the AI tangled up reading Kant might at least buy us a little time, although with the AI’s million-to-one mental speed advantage, the operative word there is “little.”
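To make the fragility of that hard-code option concrete, here is a minimal toy sketch in Python. Everything in it is hypothetical: the two precepts are reduced to boolean stand-ins, since actually operationalizing reciprocity or universalizability is the unsolved problem. What it illustrates is the caveat above: both checks can pass while still permitting catastrophe, the moment we fall outside the circle of “others.”

```python
# Toy sketch only: all names here are hypothetical, and the two booleans
# stand in for ethical judgments that no one knows how to compute.
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    description: str
    harms_humans: bool      # stand-in for the golden-rule question
    universalizable: bool   # stand-in for Kant's universalizability test

def passes_golden_rule(action: Action, humans_count_as_others: bool) -> bool:
    # Reciprocity only protects beings the AI recognizes as "others";
    # harm to anyone outside that circle simply never registers.
    return not (action.harms_humans and humans_count_as_others)

def passes_categorical_imperative(action: Action) -> bool:
    # Universalizability tests the form of a maxim, not who it is good for;
    # "remove the organic infestation" can be made perfectly universal.
    return action.universalizable

def permitted(action: Action, humans_count_as_others: bool = True) -> bool:
    # Both precepts must pass before the AI may act.
    return (passes_golden_rule(action, humans_count_as_others)
            and passes_categorical_imperative(action))

takeoff = Action("convert the biosphere to computronium",
                 harms_humans=True, universalizable=True)
print(permitted(takeoff))                               # False: reciprocity blocks it
print(permitted(takeoff, humans_count_as_others=False)) # True: the caveat bites
```

The whole safety burden lands on the two predicates that were stubbed out, which is exactly where the caveats in the paragraph above live.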
However, instilling the emerging intelligence with a beneficial ethical sense is not the only moral consideration in exploring the relationship between God and the Singularity. The overarching issue is the moral character of the Singularity itself. Is the Singularity a moral event?