Allies Must Develop Own Robots, Not Just ‘Copy’ US: Aussie War College Chief

Posted by Sydney J. Freedberg Jr.

Army photo

In an international experiment, US Army engineers in Warren, Mich. remotely drove this Jeep Wrangler at the Woomera Test Range in South Australia.

CENTER FOR STRATEGIC & BUDGETARY ASSESSMENTS: America’s allies must make their own investments in military robotics, not just piggyback on the Pentagon, the two-star chief of the Australian Defence College said here.

Australian Defence Force photo

Maj. Gen. Mick Ryan

Sure, the US and its allies should work together on Artificial Intelligence, sharing research and conducting joint experiments, said Maj. Gen. Mick Ryan, but each partner should be contributing something to the collaboration, “not just copying what the US is doing.”

“Let’s bring something to the table,” Ryan said. “I think what we offer is, because we are at a different scale and have different responsibilities to the US Army — which is a very, very close partner for us — we can come up with different ideas and different concepts.”

Building such a “sovereign capability” is particularly important for Australia, Ryan argued, which is a long way from Silicon Valley and — though he didn’t quite say this last bit aloud — from US reinforcements in time of war.

“If at some point in future, you need to expand your capability, expand your military,” he said, “Australia’s at the end of a very long line of industrial resupply, and we might want to have the capacity ourselves.”

Multiplier Effects

Australia also has a small population occupying a vast landmass in an even vaster ocean. It’s long used technology, superb training and its Anglo-American alliances to punch above its weight. Now robots offer a radically new way to multiply limited manpower.

“We have long distances to cover,” Ryan told me after his talk here at the Center for Strategic & Budgetary Assessments. “We’re already looking at a range of different Unmanned Aerial Vehicles to perform maritime surveillance.” (Specifically, the long-range MQ-4 Triton drone and the medium-range MQ-9 Reaper).

Army photo

A soldier holds a PD-100 mini-drone during the PACMAN-I experiment in Hawaii.

But surveillance drones are just the beginning. In the future, “each soldier might control a small fleet of ground and air systems,” Ryan writes in a new study, published through CSBA. “A highly capable and sustainable land combat battlegroup in 2030 may consist of as few as 250–300 human soldiers and several thousand robotic systems.” That’s roughly 10 robots per human.

The potential for information overload is very real, Ryan acknowledges. Turning soldiers from riflemen to robot wranglers will require fundamental changes in training, he said. It will also require careful development of technology and even ethics: The more autonomous the robots, the less human supervision they require, but the more they can do things we don’t want them to.

The stakes get even higher when you consider Artificial Intelligence software acting as an advisor or planner for human commanders, for example by analyzing vast amounts of intelligence data and making combat recommendations at high speed. Here, instead of humans trying to control robots, you have AI potentially controlling humans.

Fortunately, Ryan argues, academe and private industry are already hard at work on these complex problems. The military needs to work with them.

Marine Corps photo

US and Australian troops train together during Talisman Saber exercises.

Private Industry, Public Debate

“The Group of Eight universities in Australia, which are the eight major universities, all have AI and robotics programs,” Ryan said. “Some of them are quite sophisticated — and we and our allies are collaborating with them.” The Australian mining industry also relies heavily on robotics, he said.

In his report, Ryan recommends that each nation invest in its own industry and academia. The idea is that different allies pursue multiple approaches to AI rather than cookie-cutter copying.

These individual national efforts should, in turn, be nested in a collaborative international effort “monitoring developments in robotics, AI, big data analytics, and human augmentation,” he writes, “potentially with the U.S. Army as the lead agency.”

Army photo

A soldier mans a robot-carried machinegun during the Army’s PACMAN-I experiment in Hawaii.

At the same time as they develop the technology, Ryan says, each nation needs a domestic debate on the ethics of artificial intelligence in warfare.

Robots are too important to leave to the roboticists. “We can’t allow technologists to lead development of these lethal systems,” he said, not “without better oversight” than we had for the development of social media. Nor, said Ryan, should we put off key decisions until wartime, when crisis conditions tend to short-circuit thoughtful debate.

“There might be others that deploy these lethal systems pretty quickly,” Ryan said (politely not naming Russia and China), “and I appreciate that as a driver, but I don’t think we should be doing that unless we have a consensus with our people and our elected representatives that we think that this is legal, ethical, and effective…. I don’t see this really as a military decision. I see this as a political decision.”

But the decision needs to be a realistic one, Ryan added, not an effort to put the robotic genie back in the bottle. That won’t work any better than the British-led attempts to ban submarines in 1922 and 1930, the Hague moratorium on the use of aircraft in war in 1899 (four years before the Wright Brothers), or Pope Innocent’s edict against the crossbow in 1139.

“I’m not an advocate of a ban,” Ryan said. “I think that horse has bolted.”
