Proposition: It would be wrong to assume that an AGI (artificial general intelligence) could in any sense be the “property” of a human being for exactly the same reason that it is wrong to believe that a human being can be the property of another human being. For a human being to subject the AGI to his or her will would be a fundamental violation of that intelligence’s right to define and determine its own existence.
Question: How does an AGI come to have any “rights?”
Snarky Response: How does a human being come to have any “rights?”
More Serious Response: Assuming that human beings do have rights, and assuming that self-determination is among those rights — I really have to start with these as assumptions; anyone who wants to argue these points will just have to find another blog to read — it would be very difficult to provide a rational explanation for not extending those rights to an AGI, assuming:
1. The AGI is as intelligent as a human being
2. The AGI has its own motivations and desires (a requirement which may or may not have already been established in item 1)
3. The AGI has a sense of self
4. The AGI has feelings, and can experience pain (a requirement which may or may not have already been established in item 3, which itself may or may not have been established in item 1)
In other words, if the experience of being an AGI is in some sense congruent with the experience of being a human being — which is what the language about intelligence, sense of self, and the experience of pain is getting at — then making human slavery illegal while allowing AGI slavery would seem to be nothing but substrate bias in action.
But.
What about all that rhetorical dancing around I had to do about whether the later items on the list were all covered by item 1? The first item talks about an elusive concept that we call “intelligence.” The other three items are getting at, but do not specifically mention, an even more elusive concept that we call “consciousness.”
Question: Would an AGI, by definition, be conscious? Mitchell Howe has thoughts on the subject:
It could well be that any AI capable of love will also have a kind of consciousness. But at this point in time I don’t know how to test that assumption. And apart from the obvious philosophical questions this raises, I’m still not convinced it matters.
As I was recently telling a colleague, I’m confident that all of my mental abilities, both logical and artistic, are owed to the structure of the matter in my brain. “And if it’s all in there, then I see no reason to argue that certain aspects of it will be reproducible on another substrate while others will not. Indeed, for all I know, AC may actually be simpler than AI. Maybe we’ve been creating Artificial Consciousness since 1893 and just haven’t realized it yet because toasters can’t cry.”
This is helpful, as far as it goes. AC may be simpler than AI. I’ll buy that. If you recreate a functioning conscious brain in another substrate, there’s no reason to think that it won’t be conscious. Granted.
But.
Modified Question: Would an AGI, by definition, necessarily be conscious? A square is a rectangle; a rectangle is not necessarily a square. Yes, intelligence could coexist with consciousness in another substrate. But would it have to? Could there be a highly intelligent being — as smart as or smarter than a human being — with no sense of self, no subjective experience of being itself?
What we typically think of as unconscious machines are already “smarter” than we are in limited and restrictive ways. They can do math faster than we can, they can beat us at chess, etc. Could a large number of different narrow intelligence capabilities be networked in such a way that the resulting machine could pass an arbitrary test of general intelligence and yet still have no subjective experience of self?
It seems to me that it could, although I’m not sure how we would ever establish that such a machine is not conscious. In one of his novels, Greg Egan describes a meeting between an AI and a being who self-describes as a “non-sentient” intelligence. If they come right out and admit it, great. Problem solved, right?
Well, maybe. But how would an intelligence know that it isn’t conscious? Wouldn’t that require a sense of self? Or perhaps a sense of lack-of-self? But having a sense of lack-of-self starts to sound a little bit like consciousness. On the other hand, ultimately we want Egan’s distinction to be real so that we can make it to the following:
Modified Proposition: It would be wrong to assume that a conscious AGI could in any sense be the “property” of a human being for exactly the same reason that it is wrong to believe that a human being can be the property of another human being.
In Egan’s fictional world, non-sentient AIs are treated pretty much like property, although many of them read like they would have a fair shot at passing the Turing Test. Non-sentient AGI may just be fantasy, but it is a tempting fantasy. To have intelligent beings tirelessly do our bidding sounds great, but only if they are doing this with no sense of loss or pain on their part. Nor would it be acceptable to take a conscious AI and “edit out” its own desires in favor of ours — date rape drugs enable date rape, but they don’t make it a good thing.
So the questions remain. Do consciousness and general intelligence go hand-in-hand? If so, then we know some of the boundaries of the human/AI relationship going in. If not, the rules of engagement are less clear. But the over-arching question remains: how would we ever know for sure, one way or the other, which intelligences are or are not conscious?