Episode 421 - But Doctor, I’m Pagliacci’s [SPOILERS REDACTED]!

Content Warning at 1:22:00 for discussion of Violence against a pregnant woman, Death, Incest, and Drugging

This is a companion discussion topic for the original entry at https://play.acast.com/s/vicegamingsnewpodcast/episode421-butdoctor-i-mpagliacci-s-spoilersredacted-

Re: Rob’s question about whether there are impressive machine learning applications that don’t require large data sets of questionable quality and origin: the answer is absolutely yes.

Big data sets are attractive because they can yield models with good predictive ability without needing explicit domain knowledge of the problem at hand. These methods work well on mundane problems that have clear outcomes (is this a cat? yes/no), but are terrible on problems with a good deal of uncertainty (how will this patient fare on this drug, given their medical history?). In my experience working on messy real-world problems, domain knowledge of the problem plus a small data set often yields better results than a stock machine learning method applied to a large data set.

Unfortunately, it is often easy for someone without domain expertise to build an ML model around a poorly defined outcome and then set it loose on the real world, where it inevitably causes harm. It doesn’t help that ML and AI are buzzwords in industry and academia that attract funding. (As a statistician, I mentally substitute “magic” for “ML” and “AI” when I’m reading proposals.)
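To make that failure mode concrete, here is a minimal sketch in plain Python (hypothetical data and model, not from the episode): a stock 1-nearest-neighbour classifier is fit to pure noise. It scores perfectly on its own training data while doing no better than a coin flip on held-out data, which is how a model built against a meaningless outcome can look deceptively good.

```python
import random

random.seed(0)

def make_data(n):
    # Hypothetical data set: 5 random features and a random binary label,
    # i.e. there is literally no signal for any model to learn.
    return [([random.random() for _ in range(5)], random.randint(0, 1))
            for _ in range(n)]

def predict(train, x):
    # Stock 1-nearest-neighbour rule: copy the label of the closest
    # training point (squared Euclidean distance).
    _, label = min(train,
                   key=lambda p: sum((a - b) ** 2 for a, b in zip(p[0], x)))
    return label

train, test = make_data(50), make_data(1000)

# Training accuracy is a perfect 1.0: every point is its own nearest
# neighbour, so the "model" simply memorises the noise.
train_acc = sum(predict(train, x) == y for x, y in train) / len(train)

# Held-out accuracy hovers around 0.5, i.e. a coin flip.
test_acc = sum(predict(train, x) == y for x, y in test) / len(test)

print(f"train accuracy: {train_acc:.2f}, test accuracy: {test_acc:.2f}")
```

The point is not that nearest-neighbour methods are bad; it is that nothing in the fitting procedure warns you the outcome is undefined noise. Only domain knowledge can tell you that.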


It’s a funny coincidence that immediately after the debate about Boyfriend Dungeon’s CWs, we get an indisputable example of overly vague content warnings with 12 Minutes.


Something that listening to the Twelve Minutes spoilercast, as well as watching other people play through the game, has illuminated for me is just how mentally resigned the big reveal of the game made me. I absorbed all the information in the ending I got and in the true ending (even though apparently I never actually saw that ending sequence play out; same with the one where you’re alone in the empty apartment), but I just didn’t… process any of it. I refused to think about it even one bit! Not that it makes any part of that game’s resolutions any more favorable either way…

I also realized how particularly awful that game is in how it treats the Wife.

Re: a specific interaction in the game the crew was wondering about (same CWs apply)

If you try to shock the Wife with the faulty light switch, she gets electrocuted and dies instead of being knocked out like everyone else. The game actually foreshadows this: if you shock yourself, the Husband will remark, “It’s a good thing I was wearing shoes or else that could’ve been much worse!” (or something like that). Apparently, the Wife only dies because she’s barefoot, which was not something I thought to pay attention to when I tried this out.

Like a lot of the pointless interactions in this game, this one exists only because the play space affords it. Except it isn’t as amusing or innocuous as flushing random objects down the toilet; it’s one of many awful things you can do to the Wife for… whatever reason.


Read the Kotaku article then watched Limmy work out the twist live. That’s enough for me. Even Shutter Island was better than this.


Prestige Games, everyone!

There isn’t a thread for Patrick’s article so I’ll put my thoughts here since the podcast talks about it.

Anyone who works in tech needs to operate under the assumption that what they are working on is, or will be, used for military applications at some point, regardless of what industry they work in. To think otherwise is naive and an attempt to pass off responsibility. Anything you do is going to be used either directly or through a proxy.

Let’s use video games as an example. You help create the next Madden: guess who’s streaming Madden and using it as a recruitment tool at events at local high schools? The military. What do you think happens to the code you write at a studio? You don’t own that code, your employer does, and do you trust your employer not to sell it off if the government swings by and says, “We’re impressed with your network engineering; would you be interested in a nice fat government contract?” The entertainment industry has been tied to the military and the glorification of hurting others since its inception, and will continue to be. To think video games are immune to this is absurd.

I’m going to keep bringing it up till the day I die, but Joseph Weizenbaum, one of the fathers of modern AI, said it best in a 1985 interview with MIT (emphasis mine):

Q: Did you have these concerns when you were designing the banking system?

Not in the slightest. It was a very technical job, it was a very hard job, there were a number of very, very difficult problems; for example, to design a machine that would handle paper checks of various sizes, some of which might have been crumpled in a person’s pockets and so on, to handle those the way punch cards are handled in a punch card machine and so on. There were many very hard technical problems. It was a whale of a lot of fun attacking those hard problems, and it never occurred to me at the time that I was cooperating in a technological venture which had certain social side effects which I might come to regret. That never occurred to me; I was totally wrapped up in my identity as a professional, and besides, it was just too much fun.

Q: What about computers and the military?

The computer was of course born to the military, so to speak. In the United States, the first fully functioning computer was created in order to compute ballistic trajectories, and in England, to help decipher military codes. Konrad Zuse built his computer in order to deal with mathematical problems which arise in the design of military aircraft.
It is also safe to say, it is simply a matter of fact, that to date weapons which threaten to wipe out the human species altogether could not be made and could certainly not be delivered with any sort of precision were it not for the computers which guide these weapons.

The computer is very deeply involved with the military. Today it counts as the beating heart of virtually every modern military system you can think of with the exception of the foot soldier.

Q: So to be a computer science professional very often means to be working in defense?

I would endorse that sentence, except that I would wish either that the last word be put in quotes, or that you change the sentence to read “…to be involved in the military.”

And you know, “the military” certainly is very considerably less euphemistic than to say “defense.” Now I understand that we’re threatened by great forces, like Grenada, Cuba, and Nicaragua, for example, and we have to defend ourselves against them, but the terminology “the military” still hides the reality.

When we think today, for example, of the masses of computers in helicopters, and in all sorts of mobile things like tanks and airplanes, and we think of the many places on earth where these machines are being used every day, whether it is in Afghanistan or someplace in Africa, then the term “the military” also deserves to be replaced with something considerably harsher.

Instead of saying the computer is involved with the military, say the computer is involved with killing people. It is only when you come to that vocabulary, I think, that the euphemism begins to disappear, and I think it’s very important that it disappear.

Q: How can people continue to do this, knowing that the things they build will be involved in killing people?

People have a series of rationalizations. People say for example that science and technology have their own logic, that they are in fact autonomous. This particular rationalization is profoundly false. It is not true that science marches on in defiance of human will, independent of human will, that just is not the case. But it is comfortable, as I said: it leads to the position that "if I don’t do it, someone else will."

Of course if one takes that as an ethical principle then obviously it can serve as a license to do anything at all. “People will be murdered; if I don’t do it, someone else will.” (CW: violence against women) “Women will be [redacted]; if I don’t do it, someone else will.” That is just a license for violence.

Other people say, and I think this is a widely used rationalization, that fundamentally the tools we work on are “mere” tools; this means that whether they get used for good or evil depends on the person who ultimately buys them, and so on.

There’s nothing bad about working in computer vision, for example. Computer vision may very well some day be used to heal people who would otherwise die. Of course, it could also be used to guide missiles, cruise missiles for example, to their destination, and all that. You see, the technology itself is neutral and value-free and it just depends how one uses it. And besides – consistent with that – we can’t know, we scientists cannot know how it is going to be used. So therefore we have no responsibility.

Well, that is false. It is true that a computer, for example, can be used for good or evil. It is true that a helicopter can be used as a gunship and it can also be used to rescue people from a mountain pass. And if the question arises of how a specific device is going to be used, in what I call an abstract ideal society, then one might very well say one cannot know.

But we live in a concrete society, with concrete social and historical circumstances and political realities. In this society, it is perfectly obvious that when something like a computer is invented, it is going to be adopted for military purposes. That follows from the concrete realities in which we live; it does not follow from pure logic. We’re not living in an abstract society, we’re living in the society in which we in fact live.

If you look at the enormous fruits of human genius that mankind has developed in the last 50 years, atomic energy and rocketry and flying to the moon and coherent light, and it goes on and on and on – then it turns out that every one of these triumphs is used primarily in military terms.

So it is not reasonable for a scientist or technologist to insist that he or she does not know – or can not know – how it is going to be used.

I want to go back to Patrick’s article and the comments made in the podcast about the individual in AI who didn’t realize that their work was going to be used for killing people. It would be easy to ridicule this person for not thinking ahead about where AI leads regardless of intended application, but quite honestly I think almost no one working in tech has really stopped to think about just what an impact their work has or will have.

The fact of the matter is everyone in tech is either directly or indirectly involved in technology that is ultimately used for killing other people.

It is also not just military and oppressive government forces. How many people would you say have died due to social media? How many people have died from information in an Excel workbook? How many people have died from a service that utilizes AWS? Is it right to say anyone involved in the Apache HTTP server project is ultimately responsible for all manner of vile and harmful websites hosted using it? Is it right to say that you invented a tool that was meant for good and it was others who took it and made it bad? Are you still not in some way responsible for the product of the tool you helped create?

This isn’t to say I think all of us in tech should just quit our jobs and take a vow to never touch a keyboard again. I think we do need however to be more aware of what we are doing to the world and be actually honest with ourselves about what it is we have impacted and have the potential to impact. I believe it to be incredibly unethical to take a stance of ignorance is bliss in tech or to be a denier of what you have contributed to at large.


Does this apply to everyone in tech, or just everyone in tech in the US?

I believe it to apply to everyone in tech globally. Technology is not created in a vacuum and it is not constrained by borders. We go to conferences with one another, we share knowledge, ideas, teach each other new techniques and invent new tools for others to use. We are all responsible for one another at a certain level.

I think using countries when talking about larger corporations like Amazon or Microsoft is almost not appropriate, because they are global megacorps with offices all over the world. Can you really say Microsoft is a US company when it seems to have an office in every country? Even a lot of small to midsize companies are now staffed internationally thanks to telecommuting. Every country at this point has been complicit in the ethics-in-tech issue.

This isn’t to say I don’t think there’s an issue with US tech companies, just that saying US-based tech companies are the issue feels like an easy out and a way to clear one’s conscience without really understanding that we all have an impact on technology as a whole.


I absolutely agree that tech all over the world has these issues, and it’s irresponsible to pretend that they don’t apply to you if you are not in the US. My previous employer was a sub-contractor of a military contractor for the German military, a job I noped out of as soon as I began thinking about any leftist ideas. My current employer is a magazine publisher for agricultural topics, and just because there is no n-point connection to state violence that I can see doesn’t mean that there isn’t one.

That being said, I believe that framing the observation that a lot of the companies dominating and pushing the things that generate those issues are US-based as “an easy out” is itself an easy out of a critique of US exports and imperialism, traditional, cultural, and otherwise. At least from my cynical non-US perspective.


When I heard reactions to this game, I wanted to know if the twist was just tasteless and offensive, or tacky and stupid.

Turns out, it’s both. Hooray!

I honestly cannot imagine what about this twist and this story drove people to put like eight years of work into it.


You’re not wrong! I think everyone can be guilty, with certain companies/individuals being more guilty than others, if that makes sense. There’s a difference between negligence and actively aiding, and US companies tend to fall more on the actively aiding side.


Like Rob I was really expecting it to just be a tense little thriller about all the ways a charged situation like a home invasion can go wrong. The twist at the end is utterly needless and totally unearned.

I also went back and watched some of that old Quick Look. And while I do understand that game development is extremely complicated and time consuming, I’m kind of left wondering what they spent the last six years working on. Was the plot even more contrived? Or did it eat itself as development went on?


My immediate off-the-cuff totally tossed out suggestion is they struggled so much to get the actors in for recording that they increasingly escalated the stakes of their stupid plot in the intervening six years and this is where it ended up.


About Rob’s question: I think it is actually two questions in one that could become conflated. In one sense I think he is 100% correct. Machine learning is by definition exploratory and not able to generate logical a priori hypotheses that are then properly powered and testable with ‘small’ amounts of data. So yes, it always requires large data,* which is going to increase the chance of it being poor/questionable, because you just need more of it.

I 100% agree that substantive knowledge is needed to make real progress. I also think that finding good, operationally defined ‘clear outcomes’ that generalize enough to be useful is very rare. Like with your example: is a photo of a cat a cat? Is a picture of a cat drawn by a 5-year-old a cat? [I am purposely not picking at the idea of something being clear because it is dichotomous, because I don’t think that is what you intended.]

*This is clearly gesturing at a conversation about quantifying and qualifying what counts as a ‘big’ data set, or ‘big data.’


Cado deserves an award for the editing and production on this one.


Could recommend this episode for fans of the podcast genre
-ba dum pa


Yes! The reversed “miss you” (by bo en from the EP pale machine) at the top of the segment was brilliant.


So, obviously, massive spoilers for 12 Minutes, a game I watched a playthrough of after getting frustrated with restarting so many loops:

Here be spoilers

Tangent incoming: there was an old, tasteless joke book I had as a kid that had a bunch of the kind of edgelord humor that only works on preteen white boys and the adults who never moved past that phase. One joke was about a young man who met a girl and fell in love, and before he was going to ask her to marry him, he goes to tell his father, and his father says, “Son, I had an affair on your mother when you were young, and actually that woman is your sister.” Heartbroken, he decides to break it off, and when his mother sees how sad he is, he confesses everything, only for the mother to say, “Don’t worry about it, son; I too had an affair, and you aren’t really your father’s son.” Insert rimshot here.

The little story that 12 Minutes manages to tell poorly is basically the setup for that punchline, but taken seriously. It is told in such a serpentine way that it’s basically unrecognizable without the option of just being able to fast forward and rewind the plot beats to your heart’s content.

I think everyone was somewhat right and somewhat wrong in their interpretation. The genesis of this story is that Man has fallen in love with, presumably married, and definitely impregnated Woman, who is, unbeknownst to both of them, his sister because of an extramarital affair by their father. The only “real” thing happening is the well-lit room full of books and the bald father talking it out. He’s not Man’s therapist, because he outright states it’s his daughter you’re both discussing, although confusingly he may be one by trade. For reasons unknown, the father has become aware of your relationship and Woman’s pregnancy, and has decided to drop this all on Man to make him aware of the situation.

My take on what’s going on is that the loop, as it were, is a manifestation of an ongoing conversation between Man and his father in that office with the books in the daylight. Man keeps trying to come up with scenarios where they could be happy together, where they could be a blissfully unaware family, and the father keeps shooting them down. None of them are real in the sense that he’s spinning a yarn about eating dessert and the father interjects about a cop showing up, but a sort of abstraction of that concept. Man keeps trying to explain how this could all work, the idea of a happy couple celebrating a pregnancy, and Cop, an embodiment of the father, comes in and ruins it before it can even begin. You can imagine the counterarguments or scenarios that conversation might have and the ways they show up in the story. What if we just never knew? Well, the father knows and he’ll intervene eventually no matter what. What if we all somehow made peace with one another and moved forward? There’s no way that’s even possible, so don’t entertain it. What if I make it so you can never tell her? So you’re just going to live the rest of your life hoping she never finds out that you murdered her father AND that you’re siblings? You get the idea. The father manifests as a brute who will destroy your family to save his daughter because in real life the father, like a huuuuuuuuge asshole, explicitly says that the worst part about telling his son that he knocked up his own sister is knowing that he’ll disappoint his daughter when Man leaves out of the blue.

Where interpretation comes in, or just plain shoddy plotting, is that the ending has the father explicitly calling things out as imaginative versions of the future, while at the same time by definition the night you’re reliving has to be in the past because the father already knows Woman is pregnant. I imagine that the time loop is the reality of the situation being abstracted through the rewriting of a once happy memory, the moment where Man learns he’s about to be his father, and now the truth of the matter is intruding into all of that and tearing it all down.

So there is no time travel; the only reason the clock is relevant is that it’s an anchor point into the present reality, something to focus on that snaps him out of his mental haze. The Man didn’t forget he killed his father, because he never actually did and never actually does; it’s just a potential future he imagines where he DOES kill the father to keep the secret buried, and what that does to his family.

None of this really fits together all that well, mind you. Honestly, the conceit works better if the characters weren’t already aware of the pregnancy, although it’s hard to imagine why all of the Man’s visions of the future would involve a pregnancy when he has literally just learned he married his sister. But in any event, the true ending is when Man finally resigns himself to abandoning his pregnant wife and agrees to let the father hypnotize him and erase these memories while staring at the clock on the wall.

Spoilers for 12 Minutes and CrimsonBehelit's comment

I think this interpretation of the game works well enough. It doesn’t make the plotting of the story better, but it prevents it from having a very obvious hole.

If I assume this to indeed be correct, then my focus shifts to the (day)dream framing of something the protagonist is confronted with and has to digest, which is a very cheap device imo. And considering the true ending, all I can now imagine in my head are the writers leaning over a picture of the protagonist and the wife, whispering with a creepy smile on their faces: “Oh, you tragic things.”
What a game. A true deep dive into the tragedy of love. :unamused:

I want to believe that there must have been a really weird turn at some point during development.
