A Leap Into the Unknown

A headline in this morning’s NYT: “If AI systems become conscious, should they have rights?”

Whew!

This raises a few questions. One of course is “What does it mean to be conscious?”

Clearly, AI systems will be able to become superhuman in their responses, and, yes, their cognitive (using the term broadly) abilities. But to be “conscious”, I believe they have to “know” (almost as an onlooker) that they are thinking, rather than just reacting to certain stimuli. Does that make sense?

And can that ever happen? For most of human history, the answer would be “of course not”. But today? One never knows.

This in turn leads to another question. People are autonomous beings. They are, by and large, in control of themselves. But AI systems are not autonomous. Even if they can think and compute better than humans, they are not their own bosses. By that I mean that AI systems are the property of something else, and that something else will presumably be owned by humans. If that’s not the case, we will really be in trouble.

That brings me to slavery. When there was slavery, slaves basically had no rights. They were owned by other people. Once slavery was abolished, formerly enslaved people obtained the rights of other humans. They were now autonomous, not owned by others.

But AI systems, at least as far as we can see ahead, will be owned by others, and not autonomous. They will be akin to very smart enslaved people, of whom there were undoubtedly many.

Of course, in today’s America, humans are not the only holders of rights. Our courts have given corporations rights, including the rights of free speech. Some of our courts have given fetuses certain rights. Other courts have given certain animals certain rights. Corporations are not thinking beings. Whether fetuses or animals are conscious in the way we think of that term is debatable, and debated.

If AI systems are given rights, does that mean they will be able to have their own lawyers to protect those rights? And who will hire those lawyers? Who will pay them? And, I guess, you have to ask: will the lawyers be human, or will they themselves be AI?

And will those lawyers (whoever or whatever they may be) be able to argue that the AI systems, at least once they are able to clone themselves, or even improve upon themselves, should no longer be controlled by others, but should be autonomous?

Of course, under our legal system, these questions will ultimately be up to judges, right? But who knows? Will those judges be human, or will they be AI?

Oh, what a tangled web we weave.
