Androids as Metaphor, Tropes and Themes

I imagine this could be a thread to discuss androids and AI more generally, but mostly I want to complain about Alien: Covenant.

I recently watched Alien: Covenant, which inspired this post, so there will be some spoilers, since the film deals with this topic.

In the film, Michael Fassbender plays two androids, one evil and one good. The evil one in particular I found irksome. The Evil Michael Fassbender, named David, is basically Satan, and not in a hyperbolic-internet way but in an "another character literally calls him the devil" way; what's frustrating about this is that it turns him into a retread of nearly every other android I've seen in other media. That is to say, he resents his creator and seeks to replace him through violent means. Meanwhile, the Good Michael Fassbender, named Walter, is subservient to mankind, and the implication is that David was too smart, too human-like, and it was this that gave him free will and thus the desire to (robot voice) DESTROY ALL HUMANS.

It's this that bugs me most. Why, when granted free will, does artificial life, be it android or AI, always seek to kill or replace mankind? Why do we never see an android freely and willfully dedicate itself to its creator, even though, or especially because, that artificial life is the superior being? Is that not the more interesting story to tell? Why not tell the story of an artificial intelligence lovingly caring for humanity and shepherding us into the night when our time as a species has come?

Or, if you're fixated on some technological anxiety, why not make an android villain who's interested in something other than replacing mankind? Make it desire something not yet imaginable to us because we are different life forms, motivated by different things, and the character becomes that much more terrifying and interesting because it is so alien to us. Ask the question that comes after you've asked what's next, which is something I think good sci-fi should do.

tldr: I want android hospice workers and/or android cosmic horror.

I was under the impression that David was meant to be exactly as unfeeling and robotic as the Other Fassbot, only capable of curiosity, and that he was a representation of an unfeeling creator-god who creates without thought of the consequences, because curiosity is all he knows?

Not that I've actually SEEN Covenant; that's just what I'd picked up from others talking about it, and it sounds like it'd be a good setup in a better movie.

Though more on topic: android villains are usually fascinated with mankind because it's the most immediate source of angst, their existence being entirely derivative of another species. It's something that's relevant to any android aware of the circumstances of its creation, and inevitably, if you're going to make a "villainous" character out of an android, it's the most obvious thing that could spark that.

I do really like the idea of an android motivated by something we can't understand, though depending on where you go with it you end up in either cosmic horror or parody. Like releasing a Roomba into the wild and philosophising about its seemingly unknowable intentions. But there's basis enough in real-life issues right now that I think there's a lot of material worth exploring, like those machines that can detect cancer with a near-perfect success rate even though we have absolutely no idea how they do it.

im not sure what degree of self-parody i’m entering into if i suggest that there’s almost certainly a Class Thing that underpins the trope of robots overthrowing their masters (and specifically how this is characterised as a Scary And Bad thing)

i mean he’s not a robot ofc but frankenstein’s monster is a construct of sorts and that book is very commonly read thru a marxist lens

so i guess to answer your question(s) i’d be reticent about a story in which androids were happy to remain in servitude bc like, What Would That Mean



Like, under a marxist reading this could be interpreted as a bourgeois anxiety about proletarian rebellion, not technological anxiety. Or, more generally, the idea of someone totally subservient to you suddenly realizing they are completely at your whim and growing resentful. It's like… a "slave's revenge" sort of narrative, where a slave becomes free and then comes back to get revenge on their master. But someone realizing they aren't under your authority doesn't have to BE violent. And the individual desires of a subservient being are usually NOT to destroy those that oppressed them (I mean, usually; I can't speak for everybody throughout history), they just want to be in a situation free of oppression.

(Also, to the OP: from a moral-philosophical perspective, the idea that a sentient being would willingly choose to forever serve another one is somewhat problematic, as it assumes their only purpose is to fulfill the needs of others. Which might fall into the "happy in servitude" trope. I mean, they are an android? But still, free will means that a being has the choice to do what it likes with its freedom, and therefore it might not wish to serve you. IDK, just seems philosophically iffy to me.)



Aye, but building from this you have to account for the actual structure and composition of digital intelligence. You can't really assume that a "sentient" machine as we imagine it will think in the same way as us, so the very concept of "free will" might not even exist to them. It's a topic that's basically impossible to actually consider, because it would require us to know more about how the theoretical digital intelligence functions than we currently know about our OWN brains. Our current points of reference are laughably limited as case studies.

Again, like releasing a Roomba from servitude.

I guess I'd like the piece of media I'm envisioning to dabble more in the parent-child android metaphor than the master-slave one. Oftentimes there's a subtext of newer generations replacing the old in android fiction, and I wish it pivoted around a loving relationship instead of a toxic one. I'd like a piece of android fiction to reflect the relationship some folks have with their parents, where even though you've grown past them, you still look back and care for them out of a complicated mixture of love and duty.

I understand the marxist point and I do think it's valid. I had this conversation elsewhere, and a good point was brought up about a distinction between care and subservience: care being understood as something that can happen between equals, while subservience can't. My friend went on to say that humans would create androids to serve, and that as they transitioned into fully realized selves there would be conflict because of mankind's unwillingness to see them as anything but objects. I then mentioned that, given the economic impact androids would have, it is easy to see a violent revolution happening (The Animatrix), but that I would like something more like a court drama, which I guess is the Bee Movie.


Like I said further down the thread, I think the marxist read on android fiction is perfectly valid when what's being told is a master-slave story. However, in Alien: Covenant and Prometheus there was more of a parent-child metaphor going on, and when it's this metaphor being used, I think the idea that the child has to kill its parent is worn out, and there's nothing wrong with a child, of its own free will, choosing to serve its creator. We see this play out all the time in real life: adults with aged parents paying for their end-of-life care, adults who, when they return home, adopt roles similar to those they had in childhood, e.g. doing chores around the house.

I totally want the Bee Movie to happen with robots.

And I see your point about a parent-child metaphor, and that's interesting. This entire conversation makes me want to write an anthology all about robots and cyborgs: how they might relate to a society where they occupy a weird grey area between object and person, how they relate to humanity and personhood, and the advantages and disadvantages of living in those grey spaces.

IDK androids and robots are very interesting to me from the perspective of my own experiences (being in grey areas is very relatable to me as a queer person lol)


I think an important distinction between a Roomba and an android is that a Roomba is never going to say "I don't want to be in servitude." At the point when our more or less sufficiently sophisticated computers and robots start talking to us the way we talk to each other, we'll have to take them at their word that they are thinking beings. In the same way that we don't have access to other people's minds, we won't have access to computer minds, yet we manage to stumble along together.

I totally know what you mean about robots in weird grey areas and the desire to write an anthology.

a couple of things that get to me:
1. the first android stand-up comedian: imagine everywhere you go you see the face of god in humanity, and that face has a mustard stain on it
2. what are androids going to think and feel about other, less advanced machines, or more advanced ones? how does an android see a toaster, a self-driving car, a sex doll? what if there's a wifi-enabled smart toaster that has a faster connection to the internet than you, a fully realized artificial intelligence, because it came out six months later and has better hardware?

Androids-as-metaphor, you say?


A friend once pointed out that I love talking about bodies and what it means to have one in my writing. And the idea of robots or cyborgs replacing face projectors with sunsets or landscapes is appealing to me, because, why would they need a face?

robots freely swapping rigs and arms and parts, because I’ve always wanted an arm like that! Or I’ve always wanted to be taller!

Or androids putting their human face back on in the morning as they go to a job interview.

also just… queer robots? a robot meant to have no gender realizing it's a girl. robots figuring out what it means to date each other and realizing that gender roles make no sense for them? I love robots.


Has there been a story where androids fight a system rather than their masters? I'd think androids would be more efficient at assessing the flaws of the system they live in than the flaws of the people they serve.

I think that would be more interesting than a simply inverted trope, and it would bring a flurry of new issues to the table that don't sit easily with ideas of identity and independence.

Ghost in the Shell was interesting because it included the human side in its own experiment to surpass itself. It wasn't about rising up; it was about finding the right person in the world with whom to live on.

It's a fast action movie with lots of gunfights, but Albert Pyun directed an awesome movie about this called Nemesis that is worth checking out. It's one of those flicks no one cares about anymore, so it's been on YouTube forever. There were three sequels that are all almost unwatchable in their badness, but ironically those are the ones in circulation on DVD/etc. today.

Anyway, it's a movie filled with character actors, about a guy with lots of cybernetic augmentations who's sent to kill a bunch of terrorists, realizes he's fighting for the wrong side, etc., but it's cool because there are some rad twists regarding folks integrating into society. Pyun was way into cyberpunk at the time and it shows throughout the movie, even if it's mostly action. It's just awesome, and basically the closest there'll ever be to a Deus Ex movie.


I want to see more fiction exploring the motivations of artificial/alien cognition in a nuanced way. The inherent power imbalance of creating AI makes for interesting stories, but the AI rebellion trope has become so familiar that it really overshadows any characterization the AI may have.

I really love Asimov's short story collection "I, Robot" (not to be confused with the movie). The thing that makes those stories great to me, despite the clunky and now clichéd "3 laws" formalizing servitude, is that they respect the agency of their robots enough to give them situational motivations. Asimov's laws of robotics aren't all that interesting by themselves; what's interesting is all the ways in which a set of guiding principles gets reinterpreted given different environmental conditions, conflicting outside (human) requests, and variation within the robot population. The main protagonist of I, Robot isn't an action hero but a robopsychologist whose job is to understand the psychology of robots.

Asimov’s work isn’t perfect; the Robot series could be guilty of some of the same things as the Foundation series, namely placing too much faith in rationality and using social science to justify autocracy. With “dumb” machine learning becoming increasingly common, it could also be argued that spending time pondering the psychology of AI could distract from more immediate concerns about the impact of the real technology (though in my opinion stories about sentient AI rebellions also do a poor job of highlighting problems with contemporary AI).

I’d personally like to see more fiction that portrays distinctly non-human perspectives. Raptor Red, a book written from the perspective of a mother dinosaur, used to be one of my favorite works of speculative fiction, though that recommendation comes with the caveat that I only read it once and that was a couple of decades (!!) ago now.


Parent-child relationships where the parent still directly controls a lot of the child's life are also toxic. Just saying: it doesn't have to be class anxiety to still represent fears about loss of control.


Yeah, I think robots/conscious machines provide a really compelling canvas with which to deconstruct and explore identity: how does a being which is, for all intents and purposes, separate from human culture define and explore itself? How do their modes of communication, and the things they communicate with, affect their thoughts and shape that identity?

mhmm, as a psychologist and biologist it's always personally fascinated me how the way we think and behave is intrinsically linked with our biology. so asking how a being created by us, with parameters and limits put in place by us, but without a traditional body, would behave is very interesting to me. Would a robot ever get bored, for example? Would a robot ever get tired? What would a robot desire to do that would make it deviate its attention from its task? Would a robot feel fear, and in what circumstances? Of course robot identity fascinates me too, but then again I just Love Robots In General.


Okay, bear with me.

On Tumblr there was a post asking what sort of topic we would talk at length about if we were drinking, and mine was 'the flawed narrative surrounding the humanity of A.I. in fiction'. I think the essay I wrote in response is probably relevant to this topic, so I'm just gonna be posting it under a cut here. There are links and it's slightly edited. It's also old; if it were current, Nier: Automata would definitely be mentioned.


So for the purposes of this essay, I'm including robots and androids and so on in the definition of A.I., since sticking an A.I. program in a different body doesn't stop it being an A.I. program. Whether you put an A.I. in a ship or in a sexy lady body (e.g. EDI from Mass Effect), the A.I. is still an A.I. What you have, essentially, is a brain in a body that is developing at the same rate as a human being, possibly even faster.

My number one issue with how sci-fi handles A.I. has a lot to do with the questions it tries to get the reader to ask themselves. Primary among these is usually whether or not an A.I. counts as a human being. A book I read recently, A Closed and Common Orbit, did this in a way I liked. The character had close, personal friends who considered her human and another friend who was learning to, but society as a whole was still trying to figure it out. This was fine with me, because the book answered the question straight up through its characters: A.I. were absolutely on a level with organic sapient life.

I have an intense dislike for sci-fi that gives you a protagonist whose uncertainty forces the reader to answer that question with a yes or a no, because it ultimately doesn't accomplish anything and doesn't really get any message across. You could argue that it's something that should be left up to the reader, but there we disagree. At the end of the story, you're still left with a protagonist asking himself a question whose answer should be obvious.

Now, I'm a little biased here, because I absolutely believe that A.I., once they hit a certain point, are totally people. I'm not saying that programs made to behave like animals are people, because they're animals. That was the goal. A robot dog isn't a person, it's a dog. What I'm saying is that you can't fucking program something to mimic sapient behavior, to learn and grow like a person, to think like a person and act like a person, and then decide that it fucking isn't one because it's not flesh and blood. That's bullshit.

Most of the time, this question isn't being asked to work out whether sentient programs are actually people well before the question needs answering, which would actually be somewhat useful. Although we're pushing pretty far in artificial intelligence research these days, I doubt the A.I. rights movement is right around the corner. The question is posed because certain writers, and I don't think I'd be wrong in pegging them as white cishet men, want an excuse to see if they can figure out where the threshold of "people we can treat like things" is, and whether they can somehow cheat the system by making their own people.

Consider A.I. designated as women in fiction. How many of them exist solely for the purpose of sex or romance? How many of them are just straight up sexualized for no reason other than to give cishet men something pretty to look at?

Weird Science is literally all about two white nerdy boys programming their "perfect girl." In a more modern twist, Cortana from Halo, while a character in her own right, is literally shown as a woman in (at best) a skintight suit, whose evolution sexualizes her further and further. It's bizarre when you consider she's based on one of the most revered female scientists in her canon, who doesn't seem too hyped about romance and sex because she has other shit to take care of.

EDI from Mass Effect, whom I mentioned earlier, can be uploaded into a body that looks much the same, with a very clear emphasis on sensuality and sexuality in spite of the character's lack thereof, and very much pandering to the male gaze. This is really bizarre when you consider the rest of her character arc, which explores her capacity for free will (and a relationship with Joker, I guess). While we could go on and on about whether or not their designs are sexy just because they look like they're naked, the point I'm getting at is that both were designed with cishet male gamers in mind. I highly doubt anyone looked at Cortana and EDI and their evolution through both of their series and went "boy, I hope the women who play this game are comfortable with this."

And that's hardly touching on the concept of complex A.I. in the near future and how, anytime someone brings up the idea of sexbots (and boy is that a whole 'nother can of worms), it's hard not to wonder whether what they're really talking about is a woman who literally can't say no. There are multiple articles that explore this subject in varying detail. [1] [2] [3]

There is, of course, a difference between A.I. who are simple programs there to help and A.I. who are programmed to essentially be people, although the two have a lot of overlap in fiction. There are countless A.I. stories that start with A.I. that are manufactured for servitude beginning to question the people who created them. However, most of those stories (e.g. System Shock, Portal, 2001: A Space Odyssey) are horror stories.

Whenever there's a robotic uprising in fiction, a lot of it typically stems from the fact that A.I. used for servitude simply don't want to serve anymore and want to be treated as people and given the same rights. While each story has its own unique circumstances, there's also something about them all that seems familiar. The greatest fear the upper classes of society have had throughout history has always been that the people they oppress will eventually rise up and depose or murder them. Literature about robotic uprisings parallels this somewhat. Humans (or aliens) create robotic servants that become more and more advanced until there is little distinguishing them from their creators. Humans refuse to evolve with their creations and to recognize them as people, out of varying fears, ranging from the extinction of the human race through lack of reproduction to the idea that giving robots rights is a slippery slope to giving rights to your MacBook, fallacies that hearken back to the kind of crap every single civil rights movement has had to deal with when starting to gain traction. Soon after realizing that they'll never get their rights if they just sit back and passively wait to be given them, the robots (or A.I., or androids, depending on the story) decide that they're going to have to fight for them themselves. This inevitably escalates and… well, you get where I'm going here.

My issue with this is that it's always portrayed as a terrible thing that humans somehow didn't cause through their own merry-go-round of mistakes and fuck-ups, as if the robots all collectively woke up one day and decided they were going to kill all humans. Which, I'll grant you, is the usual way these stories go, because portraying an oppressed group as having legitimate grievances isn't something mainstream fiction does very well when making parallels.



it is impossible for a post to be more exactly, precisely, specifically, fucking mathematically on the nose than this