There isn’t a thread for Patrick’s article, so I’ll put my thoughts here since the podcast talks about it.
Anyone who works in tech needs to operate under the assumption that what they’re working on is, or will be, used for military applications at some point, regardless of what industry they work in. To think otherwise is naive and an attempt to pass off responsibility. Anything you build will be used either directly or by proxy.
Let’s use video games as an example. You help create the next Madden; guess who’s streaming Madden and using it as a recruitment tool at events at local high schools? The military. And what do you think happens to the code you write at a studio? You don’t own that code, your employer does, and do you trust your employer not to sell it off when the government swings by and says “we’re impressed with your network engineering, would you be interested in a nice fat government contract?” The entertainment industry has been tied to the military and the glorification of hurting others since its inception, and it will continue to be. To think video games are immune to this is absurd.
I’m going to keep bringing it up till the day I die but Joseph Weizenbaum, one of the fathers of modern AI, said it best in 1985 in an interview with MIT (emphasis mine):
Q: Did you have these concerns when you were designing the banking system?
Not in the slightest. It was a very technical job, it was a very hard job, there were a number of very, very difficult problems: for example, to design a machine that would handle paper checks of various sizes, some of which might have been crumpled in a person’s pocket and so on, to handle those the way punch cards are handled in a punch card machine and so on. There were many very hard technical problems. It was a whale of a lot of fun attacking those hard problems, and it never occurred to me at the time that I was cooperating in a technological venture which had certain social side effects which I might come to regret. That never occurred to me; I was totally wrapped up in my identity as a professional, and besides, it was just too much fun.
[…]
Q: What about computers and the military?
The computer was of course born to the military, so to speak. In the United States, the first fully functioning computer was created in order to compute ballistic trajectories; in England, to help decipher military codes. Konrad Zuse built his computer in Germany in order to deal with mathematical problems which arise in the design of military aircraft.
[…]
It is also safe to say, it is simply a matter of fact, that to date weapons which threaten to wipe out the human species altogether could not be made and could certainly not be delivered with any sort of precision were it not for the computers which guide these weapons.
The computer is very deeply involved with the military. Today it counts as the beating heart of virtually every modern military system you can think of with the exception of the foot soldier.
[…]
Q: So to be a computer science professional very often means to be working in defense?
I would endorse that sentence, except that I would wish either that the last word be put in quotes, or that you change the sentence to read “…to be involved in the military.”
And you know, “the military” certainly is very considerably less euphemistic than to say “defense.” Now I understand that we’re threatened by great forces, like Grenada, Cuba, and Nicaragua, for example, and we have to defend ourselves against them, but the terminology “the military” still hides the reality.
When we think today, for example, of the masses of computers in helicopters, and in all sorts of mobile things like tanks and airplanes, and we think of the many places on earth where these machines are being used every day, whether it is in Afghanistan or someplace in Africa, then the term “the military” also deserves to be replaced with something considerably harsher.
Instead of saying the computer is involved with the military, say the computer is involved with killing people. It is only when you come to that vocabulary, I think, that the euphemism begins to disappear, and I think it’s very important that it disappear.
Q: How can people continue to do this, knowing that the things they build will be involved in killing people?
People have a series of rationalizations. People say for example that science and technology have their own logic, that they are in fact autonomous. This particular rationalization is profoundly false. It is not true that science marches on in defiance of human will, independent of human will, that just is not the case. But it is comfortable, as I said: it leads to the position that “if I don’t do it, someone else will.”
Of course if one takes that as an ethical principle then obviously it can serve as a license to do anything at all. “People will be murdered; if I don’t do it, someone else will.” (CW: violence against women) “Women will be [redacted]; if I don’t do it, someone else will.” That is just a license for violence.
Other people say, and I think this is a widely used rationalization, that fundamentally the tools we work on are “mere” tools; this means that whether they get used for good or evil depends on the person who ultimately buys them and so on.
There’s nothing bad about working in computer vision, for example. Computer vision may very well some day be used to heal people who would otherwise die. Of course, it could also be used to guide missiles, cruise missiles for example, to their destination, and all that. You see, the technology itself is neutral and value-free and it just depends how one uses it. And besides – consistent with that – we can’t know, we scientists cannot know how it is going to be used. So therefore we have no responsibility.
Well, that is false. It is true that a computer, for example, can be used for good or evil. It is true that a helicopter can be used as a gunship and it can also be used to rescue people from a mountain pass. And if the question arises of how a specific device is going to be used, in what I call an abstract ideal society, then one might very well say one cannot know.
But we live in a concrete society, with concrete social and historical circumstances and political realities, and in this society it is perfectly obvious that when something like a computer is invented, it is going to be adopted for military purposes. It follows from the concrete realities in which we live; it does not follow from pure logic. But we’re not living in an abstract society, we’re living in the society in which we in fact live.
If you look at the enormous fruits of human genius that mankind has developed in the last 50 years, atomic energy and rocketry and flying to the moon and coherent light, and it goes on and on and on – and then it turns out that every one of these triumphs is used primarily in military terms.
So it is not reasonable for a scientist or technologist to insist that he or she does not know – or can not know – how it is going to be used.
I want to go back to Patrick’s article and the comments made in the podcast about the individual in AI who didn’t realize their work was going to be used for killing people. It would be easy to ridicule this person for not thinking ahead about where AI leads regardless of its intended application, but quite honestly, I think almost no one working in tech has really stopped to think about just what an impact their work has or will have.
The fact of the matter is everyone in tech is either directly or indirectly involved in technology that is ultimately used for killing other people.
And it is not just militaries and oppressive governments. How many people would you say have died because of social media? How many people have died because of information in an Excel workbook? How many people have died because of a service that runs on AWS? Is it right to say that anyone involved in the Apache HTTP Server project is ultimately responsible for all manner of vile and harmful websites hosted with it? Is it right to say that you invented a tool meant for good and it was others who took it and made it bad? Are you still not in some way responsible for the product of the tool you helped create?
This isn’t to say I think all of us in tech should just quit our jobs and take a vow to never touch a keyboard again. I do think, however, that we need to be more aware of what we are doing to the world and to be genuinely honest with ourselves about what we have impacted and have the potential to impact. I believe it is deeply unethical to take an ignorance-is-bliss stance in tech, or to deny what you have contributed to at large.