Autonomy is coming to warfare, and some would say it’s already here. Weapon systems driven by artificial intelligence (AI) algorithms will soon be making potentially deadly decisions on the battlefield. This transition is not theoretical. The immense capability of large numbers of autonomous systems represents a revolution in warfare that no country can ignore. As we march towards this reality, it is important that technology leaders and military strategists begin a discussion around the moral and legal framework within which such autonomous capabilities will be enabled.
THE THIRD OFFSET
Deputy Secretary of Defense Bob Work has spent the last year describing the Department of Defense’s “Third Offset Strategy,” which seeks to revitalize the United States’ strategic supremacy. An offset strategy is the foundation of our nation’s competitive advantage, in both military might and peacekeeping. That advantage is now under threat as other countries improve their own forces and technologies.
These threats have eroded the advantages of our previous “Second Offset Strategy”—the United States’ mastery of conventional, precise smart munitions. The best demonstration of the power of smart weaponry was the Gulf War, where a battle-hardened Iraqi Army was unable to respond to the swiftness and precision of new U.S. munitions, such as the JDAM (Joint Direct Attack Munition) and GPS-guided, terrain-following systems like the Tomahawk. But the U.S. is no longer the only country with precision weaponry. In fact, Russia, China and countries purchasing defense equipment from them now have access to these Second Offset technologies. With sources of comparable technology growing elsewhere in the world, such as the anti-ship Brahmos hypersonic cruise missile developed by Russia and funded by India, the U.S. needs to pivot to new technologies. In this context, Deputy Secretary Work’s Third Offset push, with its focus on rapid prototyping and AI, is particularly relevant.
If autonomous systems are to be a pillar of future supremacy, then now is the right time to present a framework within which autonomy can be enabled in an effective and technically viable, yet legal and moral, manner.
THE IMPORTANCE OF MORALITY IN AUTONOMOUS TECHNOLOGY
To understand how the question of morality relates to autonomous technology, one only has to read through the many articles questioning the use of weapons that decide on their own targets. The questions range from the legality of this practice to fear of a Terminator-inspired “Skynet.” In truth, the discussion regarding “moral” autonomous action is not academic— it’s critical.
Since the prospect of full machine autonomy—autonomy over the full range of action, including deadly response—is disconcerting to many, public debate on this topic is infused with softeners. These are comforting terms like “semiautonomous” and “human in the loop.” However, they represent an easy out, and a misleading one. They masquerade as answers when they don’t even begin to address the question.
Effective machine functionality in a variety of situations requires full autonomy, and a wink and a nod to a “human in the loop” is actually detrimental to properly confronting and addressing this need. For example, how do we expect a swarm of autonomous undersea vehicles to act when they have a critical target in sight but realize that communications are being jammed? Do they let the threat materialize because they can’t contact their human commanders? Or do they take autonomous action for our protection?
FULL AUTONOMY IS IMPORTANT, BUT NEEDS TO BE EXPLAINABLE
With all of these complications, why even go down the path of full autonomy? The answer is simple: military superiority and survivability. Autonomy grants an edge. The full potential of autonomous systems cannot be realized if there are humans in the loop for all key decisions.
The First Offset was about massive firepower delivered bluntly and coordinated over a modest window of time. The Second Offset was about modest firepower, delivered with precision and coordinated over a longer window of time. The Third Offset will be about micro firepower, delivered at unimaginable scale with immense precision on the actual target, and coordinated over a minute window of time. It will be about instantaneous, massive, surgically precise strikes. However, without full-spectrum autonomy, you lose several of these attributes. Are we sure that our competitors will compromise on these to remain in control of their own AI systems?

That said, it’s important to make autonomous decision-making as transparent as possible, particularly in these cases. The AI systems that have succeeded in today’s commercial world don’t make this easy. Artificial Neural Networks (ANNs), Deep Learning and other approaches that leverage vast networks of statistical weights learn patterns and behaviors that are inherently uninspectable. Much like how an autopsy on a biological brain does not reveal the memories or experiences of the individual, taking a digital “scalpel” to a neural network reveals nothing but arrays of millions of numbers, none of which carries a human-interpretable label or any clue about which behavior or feature it represents.
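The opacity described above can be seen even in a toy model. The sketch below is purely illustrative and hypothetical (it is not drawn from any deployed system): a two-layer feed-forward network whose entire learned behavior lives in its weight matrices. “Dissecting” it recovers only unlabeled floating-point numbers.

```python
import random

random.seed(0)

def rand_matrix(rows, cols):
    # A weight matrix; in a real system these values would be learned
    # during training. Here they are simply randomly initialized.
    return [[random.gauss(0, 1) for _ in range(cols)] for _ in range(rows)]

# A tiny two-layer network: 8 inputs, 16 hidden units, 4 outputs.
# Deployed deep networks hold millions of such parameters, not hundreds.
w1 = rand_matrix(8, 16)
w2 = rand_matrix(16, 4)

def forward(x):
    # Hidden layer with ReLU activation, then a linear output layer.
    hidden = [max(0.0, sum(x[i] * w1[i][j] for i in range(8)))
              for j in range(16)]
    return [sum(hidden[j] * w2[j][k] for j in range(16)) for k in range(4)]

decision = forward([random.gauss(0, 1) for _ in range(8)])

# "Autopsy" on the network: every behavior it exhibits is encoded in
# w1 and w2, yet all inspection yields is raw, unlabeled numbers.
total_params = 8 * 16 + 16 * 4
print(total_params)   # 192 parameters, none with a human-readable meaning
print(w1[0][:3])      # three raw weights: no clue what feature they encode
```

The `forward` function produces a decision, but nothing in `w1` or `w2` explains *why* that decision was reached, which is exactly the explainability gap the authors raise.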
In order to improve behaviors, societies or systems, one must get to the root cause. Even ancient criminal law prescribed different outcomes for the same action given a different motivation: hang him if he killed in anger, but let him go if he killed in self-defense. Many successful AI techniques lack this level of transparency or explainability. They propose a decision, but cannot fully explain why it’s the right decision or what their “motivation” was.
Policy and defense experts, such as P.W. Singer, are raising important questions about building a framework for moral use before militaries start using these systems. My own company, through natural language generation and algorithmic ensembling, is working on related explanations for why something is a threat. We, as technologists and military strategists, must understand why a system proposes to do what it does, so that we may optimize and improve it. As AI capabilities rapidly head in a direction where autonomous systems can make life and death decisions, it is time to demand explainability and accountability from the digital brains we are building.
by Amir Husain and General John Allen (Ret.)
Last modified: November 2, 2017