Drones fly missions guided by networked digital information, often with unpredictable collateral damage. Stuart Russell has been researching and teaching at the University of California, Berkeley since 1986. The British-born scientist has received numerous awards and is considered one of the leading researchers in the field of Artificial Intelligence.
The textbook “Artificial Intelligence: A Modern Approach,” co-authored by Russell and Peter Norvig, is required reading at more than 1,500 universities worldwide. Russell warns about the risks associated with the development of general Artificial Intelligence, and he is a leading voice in the international campaign for a ban on “Lethal Autonomous Weapon Systems,” which he compares to nuclear bombs.
MIT Technology Review (TR): AI is already being used to make weapon systems more efficient. Isn’t that a more immediate threat than the danger of a super-powerful AI, which you often warn about?
Stuart Russell: AI in weapon systems is a huge problem, especially because of the autonomy aspect. A fully autonomous weapon system can identify its targets on its own, which means a single person could deploy a million weapons against an entire population. These weapons have a destructive power comparable to nuclear bombs, yet they are much cheaper to produce and difficult to monitor. Simple models are already available worldwide. It is almost as if we decided to give up control over nuclear weapons and offered them in every supermarket.
TR: Why do you compare autonomous weapon systems to nuclear weapons?
Russell: Because they endanger hundreds of thousands of lives and can fall into the hands of actors who do not adhere to any civil norms or international agreements. Think of terrorist groups like ISIS or rebels in countries with ongoing civil wars, or rogue states like North Korea.
TR: Many people immediately think of “killer robots” when it comes to autonomous weapons. Is that justified?
Russell: If you mean human-like metal beings on two legs, like in the movie Terminator, no. That’s science fiction. It’s unlikely that the development of autonomous weapons will go in that direction. In most cases, we’re talking about aircraft: drones like the Israeli Harpy system, or small quadcopters like the STM Kargu from Turkey or the Chinese Blowfish models, which can drop bombs.
TR: What is the advantage of such aircraft?
Russell: Soldiers have arms and legs because that is how humans are built. If we could fly like swallows or move in swarms like mosquitoes, we would be far more effective in combat. In the air you can be faster and more agile, moving through the third dimension and making it much harder for an opponent to hit you. These are some of the reasons why the development of autonomous weapon systems focuses particularly on small flying objects.
TR: Supporters argue that autonomous weapon systems can save lives because machines will fight against machines in future wars.
Russell: Such arguments are not convincing. One often hears: “Then our soldiers won’t have to put themselves in danger.” But that is only true if we are the only ones with such weapons. The war in Ukraine shows what happens when both sides are similarly equipped: Russians and Ukrainians both have remote-controlled weapons, and the number of deaths is enormous, both from direct drone attacks and because drones help increase the accuracy of artillery. High-tech in war does not reduce the number of casualties.
TR: We haven’t seen killer drones and other weapons that find their targets on their own, even though it should be technically possible by now. Why is that?
Russell: Several factors come into play. One is moral concerns: At least in the West, it’s not easy to find highly qualified scientists willing to work on autonomous weapon systems. Even within the military there are significant reservations. All the military personnel I’ve spoken to are aware of their moral responsibility when it comes to killing people. Many face a real dilemma, and some categorically rule out the use of autonomous weapons.
TR: Are moral concerns really that strong?
Russell: The political debate also plays a role. In most countries, public opinion is clear: People do not want such weapons developed and deployed. Perhaps that’s why governments have become more cautious in pushing the issue, which is important because they are usually the ones who commission specific systems as customers.
TR: Efforts for a UN agreement to regulate autonomous weapon systems have made little progress since 2013. Do you still have hope?
Russell: I’m cautiously optimistic that an agreement can be reached. Perhaps not in Geneva, because the rules there require unanimity. But at the General Assembly in New York, a simple majority of votes is often sufficient to pass a resolution.
TR: What would be gained by a ban? You emphasize that a significant danger comes from non-state actors.
Russell: A group like ISIS probably wouldn’t adhere to a ban, but if commercial production of autonomous weapon systems could be prohibited, such groups would have to build their killer drones themselves. This would make large-scale attacks much more difficult because it’s much more complicated to produce millions of deadly weapons on your own.
TR: Commercial technology is becoming smaller, cheaper, and more powerful. How difficult is it to convert it for military purposes?
Russell: The spread of AI cannot be prevented, and a ban on autonomous weapon systems alone is not enough. We should limit how many remote-controlled weapon systems any one buyer can purchase and ensure, at a technical level, that converting them into autonomous systems is nearly impossible.
TR: How would that work?
Russell: The weapons could have physically separate components: on one side the camera, image recognition, and signal processing; on the other, the trigger mechanism for the weapon, so that the onboard computer has no way to fire on its own. That alone would help. And one could argue that in the future all remote-controlled weapon systems should use application-specific integrated circuits (ASICs) that are permanently installed and cannot be reprogrammed.
TR: With nuclear bombs, it took the shock of Hiroshima and Nagasaki to show the world the destructive power of these new weapons. Do you fear that a catastrophe must occur with killer drones and similar systems?
Russell: I hope we can prevent a humanitarian tragedy, similar to what was achieved with blinding laser weapons, where it was recognized that the consequences would be too severe. Thousands of soldiers could have been blinded, and the decision was made: “This does not have to happen.” So it is possible; the big, open question is whether we will succeed in preventing the worst in this case as well.