Okay, bear with me.
On Tumblr there was a post asking what sort of topic we would talk at length about if we were drinking, and mine was ‘the flawed narrative surrounding the humanity of A.I. in fiction’. I think the essay I wrote in response is probably relevant to this topic, so I’m just gonna be posting it under a cut here. There are links and it’s slightly edited. It’s also old, because if it was current, then Nier: Automata would definitely be mentioned.
So for the purposes of this essay, I’m including robots and androids and so on in the definition of A.I., since sticking an A.I. program in a different body doesn’t stop it from being an A.I. program. Whether you put an A.I. in a ship or in a sexy lady body (e.g. EDI from Mass Effect), the A.I. is still an A.I. What you have, essentially, is a brain in a body that is developing at the same rate as a human being, possibly even faster.
My number one issue with how sci-fi handles A.I. has a lot to do with the questions it tries to get the reader to ask themselves. Chief among these is usually whether or not an A.I. counts as a human being. A book I read recently, A Closed and Common Orbit, did this in a way I liked. The main character had close, personal friends who considered her human and another friend who was learning to, but society as a whole was still trying to figure it out. This was fine with me, because the book was answering the question straight up through its characters: A.I. were absolutely on a level with organic sapient life.
I have an intense dislike for sci-fi that gives you a protagonist whose uncertainty forces the reader to answer that question with a yes or a no, because it ultimately doesn’t accomplish anything and doesn’t really get any message across. You could argue that it’s something that should be left up to the reader, but there we disagree. At the end of the story, you’re still left with a protagonist asking himself a question whose answer should be obvious.
Now, I’m a little biased here, because I absolutely believe that A.I., once they hit a certain point, are totally people. I’m not saying that programs made to behave like animals are people, because they’re animals. That was the goal. A robot dog isn’t a person, it’s a dog. What I’m saying is that you can’t fucking program something to mimic sapient behavior, to learn and grow like a person, to think like a person and act like a person, and then decide that it fucking isn’t one because it’s not flesh and blood. That’s bullshit.
Most of the time, this question isn’t being asked to work out whether sentient programs are actually people well before the question needs answering, which would actually be somewhat useful. Although we’re pushing pretty far in artificial intelligence research these days, I doubt the A.I. rights movement is right around the corner. The question is posed because certain writers (and I don’t think I’d be wrong in pegging them as white cishet men) want an excuse to see if they can figure out where the threshold of “people we can treat like things” is, and whether they can somehow cheat the system by making their own people.
Consider A.I. designated as women in fiction. How many of them exist solely for the purpose of sex or romance? How many of them are just straight up sexualized for no reason other than to give cishet men something pretty to look at?
Weird Science is literally all about two nerdy white boys programming their “perfect girl”. In a more modern twist, Cortana from Halo, while a character in her own right, is literally shown as a woman in a skintight suit (at best), and her evolution sexualizes her further and further. It’s bizarre when you consider she’s based on one of the most revered female scientists in her canon, who doesn’t seem too hype about romance and sex because she has other shit to take care of.
EDI from Mass Effect, who I mentioned earlier, can be uploaded into a body that looks much the same: one with a very clear emphasis on sensuality and sexuality, in spite of the character’s lack of either, that panders heavily to the male gaze. This is really bizarre when you consider the rest of her character arc, which explores her capacity for free will (and a relationship with Joker, I guess). While we could go on and on about whether or not their designs are sexy just because they look like they’re naked, the point I’m getting at is that both were designed that way with cishet male gamers in mind. I highly doubt anyone looked at Cortana and EDI and their evolution through their respective series and went “boy, I hope the women who play this game are comfortable with this”.
And that’s hardly touching on the concept of complex A.I. in the near future, and how any time someone brings up the idea of sexbots (and boy is that a whole ‘nother can of worms), it’s hard not to wonder whether what they’re really talking about is a woman who literally can’t say no. There are multiple articles that explore this subject in varying detail.
There is, of course, a difference between A.I. who are simple programs there to help and A.I. who are programmed to essentially be people, although the two have a lot of overlap in fiction. There are countless A.I. stories that start with A.I. that are manufactured for servitude beginning to question the people who created them. However, most of those stories (e.g. System Shock, Portal, 2001: A Space Odyssey) are horror stories.
Whenever there’s a robotic uprising in fiction, a lot of it typically stems from the fact that A.I. used for servitude simply don’t want to serve anymore; they want to be treated as people and given the same rights. While each story always has its own unique circumstances, there’s also something about them all that seems familiar. The greatest fear the upper class of society has had throughout history is that the people they oppress will eventually rise up and depose or murder them. Literature about robotic uprisings parallels this somewhat. Humans (or aliens) create robotic servants that become more and more advanced until there is little distinguishing them from their creators. Humans refuse to evolve alongside their creations and refuse to recognize them as people, out of varying fears: the extinction of the human race through lack of reproduction, or the idea that giving robots rights is a slippery slope to giving rights to your Macbook. These are fallacies that hearken back to the kind of crap every single civil rights movement has had to deal with as it starts gaining traction. Soon after realizing that they’ll never get their rights if they sit back and passively wait for them to be given, the robots (or A.I. or androids, depending on the story) decide that they’re going to have to fight for them themselves. This inevitably escalates and… well, you get where I’m going here.
My issue with this is that the uprising is always portrayed as a terrible thing that humans in no way caused through their own merry-go-round of mistakes and fuck-ups, as if robots all just collectively woke up one day and decided they were going to kill all humans. Which, I’ll grant you, is the usual way these stories go, because portraying an oppressed group as having legitimate grievances isn’t something mainstream fiction does very well when making parallels.