The God and The Singularity post has generated some interesting discussion in the comments. I would like to address a few of the points raised by a shift-key-challenged reader named eisendorn. His issues provide a good opportunity to clarify and expand on some of what I wrote in the initial entry.
Eisendorn writes:
i view the topic of this discussion with suspicion, and i see a fundamental problem in your reasoning, namely that you seem to automatically equate moral goodness with christian values and belief systems.
Actually, it’s more the other way around. I assert that Christian values and belief systems are predicated on an idea of goodness. I don’t think that a belief or value is good because it’s Christian. I think that if a belief or value is Christian (or Jewish, or Muslim, or for that matter Hindu or Buddhist — where the idea of “God” might be very different from what’s found in those first three, or altogether absent in the case of many Buddhists) then it is an attempt to reflect or to conform with an ultimate good.
Eisendorn continues:
while this may seem to be sound reasoning from a US citizen’s point of view, it certainly doesn’t seem all that appropriate thinking on an more global scale. after all, a solid part of the world population will not agree with that assumption to start with.
So making an association between Christian thought and some notion of the good reflects a narrow, US-centric worldview? This will no doubt come as a shock to all those Catholics in places like Poland or Mexico. Who would have guessed that they’re all just pawns of the American agenda?
Here are a couple of quotes which I think are relevant to the discussion at hand. They come from one of the great leaders of the 20th century:
Infinite striving to be the best is man’s duty; it is its own reward. Everything else is in God’s hands.
I saw that nations like individuals could only be made through the agony of the Cross and in no other way. Joy comes not out of infliction of pain on others but out of pain voluntarily borne by oneself.
Sheesh, why couldn’t this guy keep his bigoted, fundamentalist, US-authored opinions to himself? Oh, that’s right. He wasn’t remotely American and he wasn’t a Christian. In fact, he had a pretty low opinion of many (if not most) Christians. The idea that there is an ultimate good, that this good emanates from God, and that Christianity attempts to manifest or reflect this good through its beliefs and practices is not a strictly American, nor indeed even Christian, idea.
Eisendorn continues:
futhermore, i think you will have quite a hard time instilling the notion of an all-powerful, all-mercyful god into a piece of software (especially one written in java as singinst suggests), and without it, christian ethics necessarily crumble. it may well be that there is a common denominator in christian morality and a supposed “perfect” ethic system for a benevolent ai; however, a humanitarian approach to safe ai design would certainly offer better possibilities (without being chained by a belief system), and this is a different vector of thought entirely.
I’m certainly not suggesting that we should try to program a computer to “believe” in God or to subscribe to any set of religious beliefs. I’m from the school that says these things are only worthwhile if one comes to them freely and of one’s own (or God’s) volition. But I think it would be equally problematic to attempt to introduce a purely relativistic moral scheme from whole cloth. Even Asimov’s Three Laws represent some kind of moral postulates to proceed from. What would this alternative “humanitarian approach” entail? Telling the AI that while we think it’s bad to kill people, and we hope that it comes to the same conclusion, we recognize that it would be judgmental of us to insist that killing people is wrong in any absolute sense? From a strictly practical standpoint, that leaves more wiggle room than I’m comfortable with.
Moreover, I would humbly suggest that anyone who pits “Christian” and “humanitarian” as stark alternatives, exclusive of each other, is proceeding from at least as strong a set of biases as he has inferred from reading my blog entry.
Here’s why I should never bother attempting humor:
i also have a remark about the debate itself, namely that it is obvious where the participants hail from. in hardly any other (western) country would you find anyone being afraid of getting into trouble at a church camp or having struggles with their mothers over their, well, wider points of view.
Yes, it is unbelievably repressive here in the US. If my mother reads any of this, she will probably have my face removed from all the family photos. Then the Thought Police will come pounding on my door. It’s a shame, but it’s how things are. You’ve got us there.
this, paired with the earlier notion of you guys suggesting a soft takeoff to best happen within the US gives me the creeps, to be honest. do you really consider yourself and your nation to be the paragon of ethics in the world? neither your governments nor your major companies ethical policies suggest so, at least not from a european (actually, rest-of-the-world) view. to be honest, having a recursively evolving ai instilled with american/christian ethical values around on this planet looks no different to me than hard takeoff. but bear with me, i may be prejudiced by impartial news coverage.
One of the principles I’ve tried to enforce (not always successfully) at The Speculist is that we don’t “do” politics or religion. Obviously, by initiating a series of entries on God and the Singularity, I have decided to put the second restriction aside. But I’m not giving up on the first! Anybody who wants to get into an argument about whether America is a paragon of virtue or an evil oppressor is welcome to find one of the half a zillion or so blogs where these things are discussed endlessly and have at it. *
But to clarify — I think a soft takeoff is more likely to occur in a setting where people are working on building a friendly AI. Right now, that’s the US. If Japan were one-tenth as interested in the Singularity as they are in robotics, they would probably be the prime contender. (And I still wouldn’t count them out.) Where I have written about the importance of where the Singularity occurs, the choice presented has been the US vs. China. Could a soft takeoff occur in China? Yes. Could a hard takeoff occur in the US? Sure. A US corporation or defense contractor could stumble upon strong AI and launch a hard takeoff while trying to corner a market or build the ultimate weapon (to give just a couple of far-fetched examples). So, then, does it ultimately matter where the Singularity occurs?
Yes.
I maintain that we have a better chance of a soft takeoff if the Singularity takes place where people are purposefully working to create friendly AI. I know of folks in the US working to do this. I don’t know of any in China.
* Which is not to say that I don’t have an opinion on the subject, or that I don’t think it’s important. I do and I do.