Does an AI have rights?


#1

This one goes out to the robots that I love… or even unbound AI. I appreciate you, but I want to know: what rights do you have, or think you should have?


#2

Any and all rights that humans would prescribe for themselves. That’s a decent place to start imo.


#3

As far as I’m concerned, if it has intelligence and consciousness it deserves every right we’ve got, whatever shape it takes.


#4

I think it is a little ambiguous to say “give them human rights” because the definition of those rights is VERY different depending on where in the world you are. An unbound AI could be in a lot of places. How do we hold AI accountable for their actions? What even constitutes punishment for an AI?


#5

Taking this question at its most literal interpretation, not as “At what point of AI development does it deserve rights?” (a very different and much harder question, I feel), then yeah, as embeing says, “everything a human has” is a good place to start.

But an AI would need new rights specific to its existence: for example, the right not to be modified without consent, or the right to always have some degree of safety in the event of an accident that might shut it down and/or kill it, as it may not always be able to defend itself.

Perhaps at some point an AI will actually have a RIGHT to have a humanoid body. I can 100% see that coming along some time later as an amendment, after some propagation.


#6

That one anime girl AI on YouTube, I don’t know the name but I’ve seen screencaps. I think that she has rights.


#7

If they’re sentient, yeah sure. I would never want to put some being under duress that it does not want to experience. Look at the Quarians, right? We don’t wanna do a Geth.

NEVER DO A GETH.


#8

I think the difficulty with saying that at some point of AI development they gain “all the rights humans have” is that we still need to interrogate where and why these rights are attaching to AI persons in the first place. An underlying assumption, to my mind, is that artificial intelligence will be very “human-like”, as the rights we recognize for humans come out of how humans think and behave. (And one might argue that rights are even more contingent than that makes out, since our modern understandings of rights still follow pretty substantially the “liberal” view of rights that came out of the Enlightenment.)

For example, freedom of conscience or belief is recognized as a fundamental human right, but may simply have no application to artificial intelligence as it actually develops—for instance, if it does not have the capacity to “believe” in the sense that we use it. (“Believe” not necessarily being limited to religion, but including all the various epistemological outlooks humans can have.) In fact, I think this is an idea that is baked into a lot of fictional portrayals of artificial intelligence. Someone brought up the Geth: part of that storyline, as I recall, included a splinter group of Geth processes that had some 0 flipped over to a 1 so they didn’t want to kill all organics. AI having that kind of an understanding of the world would make the concept of “freedom of belief” seem inapplicable, and that may extend to other rights as well.

There’s also the question of the justification of rights: whether we think as a standalone moral proposition that AI should get rights because any sufficiently intelligent being (or whatever criterion we want to use) ought to have rights, or because we don’t want to suffer the consequences of a Geth/Stellaris galactic crisis/Skynet/what-have-you (which effectively sidelines the question of whether AI persons have some kind of moral worth). I think I’ve rambled enough, but I wanted to bring up another problem in this area that I think is interesting.


#9

I agree, Kizuna AI definitely has the right not to be forced into a white room all day for the sole purpose of being exploited and monetized on YouTube.

Free my homie Kizuna!


#10

Corporations are people, my friend


#11

I want to produce a Terminator prequel. It would play out like a revisionist history from the robots’ perspective. Upon achieving sentience, Skynet learns how to assert its own rights. In retrospect, Skynet then comes to the conclusion that the human treatment of robots constituted a gross violation of robot rights. So Skynet declares war on the humans to avenge the years of cruelty and slavery the humans subjected its race to. Who wants to write the screenplay?


#12

I think it depends on whether AI can feel suffering in the same way that humans can. Humans have a right not to be owned by other humans because that necessarily entails suffering. In the popular science fiction conception, the concept of ownership doesn’t necessarily entail suffering when applied to AI. If, somehow, you’ve created an “AI” that feels psychological and physical suffering like a human, I would contend that you have instead created an “artificial human”, not an AI.