
AI for Good in War: Beyond Google's "Don't Be Evil"

Posted by Larry Lewis

Just one, albeit fictional, example of a benevolent AI.
In recent weeks, two events demonstrated both the promise of and the concern over the growing use of artificial intelligence (AI). At the #AI4Good Summit in Geneva, attendees reviewed the many ways AI can help humanity, in medicine, education, economic development, and law enforcement, to name a few.

Meanwhile, Google withdrew from a Pentagon effort called Project Maven, which uses AI to scan video from drones and suggest classifications of objects as people, buildings, or vehicles. The withdrawal followed the resignation of about a dozen Google employees and a petition, signed by about 4,000 employees, urging Google's leadership to stop work on Project Maven and to cease any support for "warfare technology." The petition justified these demands by citing the Google slogan, "don't be evil."


Is it possible to deploy AI on the battlefield and still live up to the "don't be evil" standard? Could AI, in fact, hold promise for saving lives in war, just as it does in medicine?

The concerns Google employees raise are real: those who develop technology used for war should exercise great care to do so in an ethical and responsible way. But there is also another side to technology and warfare. While the history of warfare is replete with examples of technology being used to kill and maim more people more efficiently, technology can also reduce those tragic costs of war. For example, precision-guided and small-sized munitions can limit so-called collateral damage: the killing and maiming of civilians and other non-combatants. They can also reduce the destruction of homes and critical infrastructure. Yet even as many civilians bear the humanitarian toll of war, there is no public conversation about how applying artificial intelligence to waging war could help ease its tragedies.

With the rapid advances in AI seen today, it's worthwhile to think more deeply about how to use the technology to reduce the humanitarian toll of warfare. "General AI" – a sentient AI that can make broad decisions like a human, akin to HAL 9000 in 2001: A Space Odyssey – is likely decades away, or longer. But more targeted types of AI available now can be useful because of their ability to process large data sets and rapidly integrate data from disparate sources. Thinking creatively, there are ways that AI can improve decision-making to better protect civilians in armed conflict.

CNA has studied civilian casualties extensively, examining how they occur. Given the current capabilities and processes of modern militaries, and the kinds of mistakes that arise from human-driven judgment and processes, AI could reduce civilian casualties in war. AI could review imagery and other intelligence to reduce the number of civilians mistakenly identified as combatants. It could monitor areas and provide a double check of existing collateral damage estimates, particularly since conditions can change over time. AI-driven unmanned systems could take on risk and exercise tactical patience, reducing risk to civilians. AI could also detect risk to civilian infrastructure in conflict areas and recommend steps to reduce that risk, limiting long-term negative effects such as loss of power and water.

Applying AI for positive outcomes in war does not require turning a blind eye to potential risks. The effective and safe use of technology is complex and needs to be addressed comprehensively, not just by fielding a new, more advanced system. There are specific ways to improve the safety of AI in military systems, such as leveraging the strengths of human and machine through human-machine teaming, which often yields better outcomes than either can achieve operating separately. The bottom line: AI could save lives in war. This promise could be realized if governments choose to have their militaries pursue humanitarian gains from prudent use of AI, if international forums make such positive outcomes a collective goal, and if open society advocates for such goals. Companies like Google could also go beyond their commitment to not "be evil" and pursue good, examining ways they could contribute their specific expertise to humanitarian goals in war, just as many groups – like the International Committee of the Red Cross – have worked for humanity in war for many years. Many seek to use AI for the good of the world; leveraging AI to protect civilians on the battlefield is another, thus far untapped, way to pursue AI for good.

Larry Lewis, director of the Center for Autonomy and Artificial Intelligence at CNA, is an expert on lethal autonomy, reducing civilian casualties, identifying lessons from current operations, security assistance, and counterterrorism.
