Are Armed Drone Swarms A Weapon Of Mass Destruction?

How many armed flying robots does it take to equal a weapon of mass destruction?

The question is both rhetorical and theoretical. While armed autonomous drone swarms do not yet exist, the spectre of masses of flying robots seeking and finding human targets was the subject of “Slaughterbots,” an activist short film released in 2017. It is also an entirely plausible end-goal of the convergence of several drone technologies, such as cheap airframes, automated targeting, swarming, and explosive payloads, each of which is separately in development.

So while the technology of Slaughterbots may not be ready for deployment any time soon, working out the possible implications of such a technology is worth doing before nations and militaries fully develop it.

“Armed, fully autonomous drone swarms should be classified as WMD because of their degree of potential harm and inherent inability to differentiate between military and civilian targets—both of which are characteristics of existing weapons categorized as WMD,” writes Zachary Kallenborn, a Senior Consultant at ABS Group. 

Kallenborn’s argument is that these swarms present a technical challenge, limited military utility, and unacceptable risk, and should be regulated preemptively by norms and laws. Kallenborn makes this case in concise form in a piece published by the Modern War Institute at West Point, and at greater length in a monograph published by the U.S. Air Force Center for Strategic Deterrence Studies in May.

Managing remotely piloted drones like the Reapers used today is a labor-intensive process, with pilots, sensor operators, and analysts all tasked with guiding the robot and directing its lethal force. Even with that apparatus in place, drone strikes still produce deadly errors, killing civilians incorrectly identified as combatants.

Introducing autonomy into the mix, as the sheer scale of a drone swarm demands, means turning to code instead of human judgment for those decisions.

“For example, the weapon needs to recognize whether a target is carrying a rifle or a rake. Accurate assessment depends heavily on image resolution and target visibility,” writes Kallenborn. “Obscuring conditions such as rain, snow, or shadows may prevent civilian-military discriminators from being perceived accurately. [Armed Fully Autonomous Drone Swarms] must also identify and evaluate context. Even if a target is holding a rifle, the person may be a farmer, not a soldier. In addition, guerrilla or unconventional forces blur the line between farmer and soldier, because they lack clear indicators of military affiliation. Even if a target is a soldier, the person may be considered a noncombatant due to illness or injury.”

While Kallenborn holds out the possibility that sensors and software may someday be able to handle that problem of target discrimination, that day is nowhere close, and the risk is great.

It is this risk, in part, that has animated much of the international organizing against lethal autonomous weapon systems. Swarms cannot help but be fully autonomous if they are to work at all, so one possible near-term solution, should swarms be developed, is to prevent them from being armed, keeping humans in the loop and in control of any lethal decisions.

“Drone swarm technology, particularly self-targeting, self-mobile drone swarms, poses a significant risk to global security. Failing to develop international norms, supported by robust policies, to prevent and counter AFADS emergence risks a less secure United States and a far more dangerous world,” Kallenborn concludes.
