Google has signed a classified contract with the Pentagon to deploy its artificial intelligence systems for sensitive military operations, the company confirmed as more than 600 of its own employees demanded the deal be scrapped.
The agreement grants the Defense Department access to Google's AI models for what the contract describes as "any lawful government purpose." That language positions Google alongside OpenAI and Elon Musk's xAI in supplying cutting-edge AI directly to classified military networks, where it will help with everything from battle planning to weapons targeting.
The Pentagon has been aggressively consolidating AI partnerships in 2025. The department signed contracts worth as much as $200 million each with Anthropic, OpenAI, and Google, marking a major push to embed commercial AI into military infrastructure while stripping away the safety guardrails these companies typically maintain for civilian users.
Google's deal requires the company to modify its AI safety filters at the Pentagon's request. The contract explicitly prohibits the use of Google's systems for "domestic mass surveillance or autonomous weapons without appropriate human oversight," but it comes with a caveat: Google has no power to veto any lawful government decision about how the AI gets deployed operationally.
The company's statement to Reuters tried to soften the optics. "We believe that providing API access to our commercial models with industry-standard practices represents a responsible approach to supporting national security," a Google spokesperson said.
The internal rebellion was swift. On Monday, more than 600 Google workers issued an open letter to CEO Sundar Pichai opposing the Pentagon work. "We feel that our proximity to this technology creates a responsibility to highlight and prevent its most unethical and dangerous uses," they wrote, asking the executive to pull the company out of classified military work entirely.
The timing of Google's pivot is striking. Last year, Alphabet scrapped its longstanding ethical guardrail against developing AI for weapons and surveillance. That change in language sparked alarm on company message boards, with one engineer asking colleagues: "Are we the baddies?" Demis Hassabis, Google's AI chief, justified the shift by arguing that AI had become essential for national security.
Google's move follows a bruising experience with Anthropic, a rival AI startup. Earlier this year, Anthropic refused to strip safety restrictions from its Claude model for Pentagon use, so the Defense Department labeled Anthropic a supply-chain risk and froze it out of classified contracts. The message to Silicon Valley was unmistakable: play ball on AI militarization or lose the contract.
The Pentagon insists it has no interest in mass surveillance of Americans or fully autonomous lethal weapons. But it wants maximum flexibility for "any lawful use" of AI, leaving the door open to surveillance, autonomous targeting, and other applications Google employees fear could prove harmful.
This is not Google's first brush with military AI. In 2018, thousands of employees signed a letter opposing Google's involvement in Project Maven, a Pentagon contract using Google's AI to analyze drone surveillance footage. That employee uprising worked: Google declined to renew the contract after mounting internal pressure. Palantir, the controversial surveillance analytics firm, took over the work instead.
The Pentagon declined to comment on the Google deal.
James Rodriguez, the article's author, offered his own assessment: "Google had a choice and chose the Pentagon paycheck over its own employees' ethics concerns. History suggests this won't end well for the company's culture."