WASHINGTON: Google’s withdrawal from Project Maven, which applied artificial intelligence to military intelligence, was dangerous and naïve, the former deputy secretary of defense said. While using AI to analyze surveillance video might help the US find and kill terrorists, Bob Work acknowledged, it could also save lives by preventing terrorist attacks or errant strikes by US forces.
What’s more, Work told the DefenseOne technology conference at the Newseum here, despite its ethical objections to helping the Pentagon, Google indirectly and inadvertently assists the Chinese military, which has tentacles into the tech giant’s ventures in China.
Google has had a conflicted history in China, where it stopped doing business for years over government censorship. But today the company is standing up its new Google AI China Center in Beijing with “several hundred China-based engineers.”
Outpacing the US in artificial intelligence by 2025 is a national priority for the Chinese government. To reach this and other technology goals, Beijing has explicitly adopted a strategy of “civil-military fusion”: The People’s Liberation Army has relationships with every major company and university, and it’s a patriotic duty to provide information to the PLA. (There’s also China’s massive covert effort to steal trade secrets from foreign companies.) So, Work said of Google’s new venture in Beijing, “anything that’s going on in that center is going to be used by the military.”
Yet over 3,000 of Google’s 70,000 employees signed a letter saying “Google should not be in the business of war” and should promise that “neither Google nor its contractors will ever build warfare technology.” The petitioners successfully urged the company to withdraw from its Project Maven contract with the US military. (Ironically, other Google employees are still pitching products to Special Operations.)
In fact, Work said, Maven was “what we considered to be the absolutely least objectionable thing” they could ask Google to do: “teaching AI to look for things on video.”
The proliferation of drones and digital cameras has loaded the military down with vast video databases, of which its human analysts can plow through only a tiny fraction. Just one sensor system, the ominously named Gorgon Stare, kept 21 analysts busy full-time looking at just 15 percent of the video it produced. Sorting through such vast datasets and highlighting potential anomalies for investigation by humans is one of the primary problems being tackled by modern machine learning and artificial intelligence.
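The triage problem described above — letting software pre-screen footage so analysts see only the frames worth their time — can be illustrated with a minimal, hypothetical Python sketch. This is not Maven’s actual pipeline; the function name and threshold are invented for illustration, and it uses crude frame differencing where a real system would use trained neural networks.

```python
# Illustrative sketch only (not the Maven system): flag video frames whose
# average pixel change from the previous frame exceeds a threshold, so
# human analysts review only the flagged frames instead of the whole feed.

def flag_anomalous_frames(frames, threshold=10.0):
    """Return indices of frames that differ sharply from the previous frame.

    frames: list of equal-length lists of pixel intensities (0-255).
    threshold: mean absolute per-pixel difference that triggers a flag.
    """
    flagged = []
    for i in range(1, len(frames)):
        prev, curr = frames[i - 1], frames[i]
        # Mean absolute difference between consecutive frames.
        diff = sum(abs(a - b) for a, b in zip(prev, curr)) / len(curr)
        if diff > threshold:
            flagged.append(i)
    return flagged

# A mostly static feed with one abrupt change at frame 3:
feed = [[0] * 100, [1] * 100, [1] * 100, [200] * 100, [200] * 100]
print(flag_anomalous_frames(feed))  # -> [3]: only frame 3 needs a human look
```

Even this toy version shows the economics Work describes: one flagged frame out of five means an analyst reviews 20 percent of the feed rather than all of it.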
The Google petitioners’ objection was “you might use that data to take life,” Work said. “I fully agree that it might end up with us taking a shot, but it could easily save lives…. It might save 500 Americans or 500 allies or 500 innocent civilians from being attacked.
“The Google employees have created an enormous moral hazard to themselves,” Work said. He hopes that “it’s not a canary in the coal mine” and other tech companies don’t follow suit.
To date, however, the main barrier to doing business with the Defense Department isn’t ethics but bureaucracy. Except for a small number of defense contractors, most companies see the Pentagon as too small and low-margin a market compared to US civilian consumers — let alone Chinese ones — to justify the effort of complying with its complex regulations and congressionally imposed restrictions. Of the top 100 AI companies in the world, only two do business with DOD, said Eric Gillespie, founder of big data company Govini. (Work is on the board.)
“The two worst words you can have in a business plan, when you’re raising money from tier one venture investors in Silicon Valley, are ‘government customers,’” Gillespie said. “That is the kiss of death.”