Google joins the list of AI companies that now allow the US Department of Defense to access their cutting-edge AI technology for “any lawful government purposes”. The development comes a day after several Google employees wrote a letter to CEO Sundar Pichai, urging management to call off any deal that would see its AI used for defence purposes. Google, it seems, decided to go ahead with the deal, putting commercial interests front and centre.
Is Google signing a deal similar to the one Anthropic refused?
As part of the agreement with the Department of Defense, the Pentagon has been granted access to Google’s AI models, including Gemini, for use on classified networks and for “any lawful government purpose.” The decision positions Google alongside OpenAI and Elon Musk’s xAI – AI companies that secured similar deals earlier this year after Dario Amodei’s Anthropic refused to remove key safety restrictions.
With the rise of AI in most sectors, the US Department of Defense has been increasingly relying on AI models to help with “lawful activities” that serve the country’s national interests. The Pentagon says that integrating frontier AI models can help with mission planning, intelligence analysis, logistics, and other sensitive operations on classified systems.
Google’s announcement states that it does not intend for its AI models to power domestic mass surveillance or fully autonomous weapons. However, there is no word on how Google would enforce these restrictions if the Pentagon decided to use the models beyond those limits. Google’s language echoes OpenAI’s contract, which was similarly vague about where the company stands on such uses.
A Google spokesperson described the approach as responsible, “providing API access to our commercial models, including on Google infrastructure, with industry-standard practices and terms.” The company has nonetheless faced internal resistance, with hundreds of employees signing an open letter urging Pichai to enforce strict safety guardrails similar to those Anthropic attempted to maintain for its Claude models.
Anthropic’s ‘supply-chain risk’ backlash
The US Department of Defense had been working with Anthropic for months, testing its AI systems on defense networks and in other applications. However, CEO Dario Amodei drew a firm line during contract negotiations in the wake of reports hinting that Anthropic’s Claude had been used in US military operations in Venezuela. The company insisted on two core safeguards:
– prohibiting the use of its Claude models for domestic mass surveillance of American citizens
– barring its AI from lethal autonomous weapons systems that could select and engage targets without meaningful human oversight.
When Anthropic refused to drop these restrictions for “all lawful purposes,” the Pentagon, under Defense Secretary Pete Hegseth, designated the company a “supply-chain risk” to national security in March 2026 – a label typically reserved for adversarial foreign entities.
This designation triggered widespread restrictions on military contractors doing business with Anthropic and prompted a lawsuit from the AI firm. A judge later issued an injunction blocking the designation, allowing the legal challenge to proceed.
Anthropic publicly positioned its stance as a matter of principle. Amodei stated that the company could not “in good conscience” accede to the Pentagon’s demands, and that they would prefer to forgo the relationship rather than risk undermining democratic values.
However, within weeks of Anthropic releasing Claude Mythos – its most capable AI model – there were reports of talks resuming between the Pentagon and the AI firm, with Anthropic recommitting itself to US national security. US President Donald Trump also acknowledged a meeting with Anthropic officials, and while he didn’t confirm any deal, he hinted that talks were progressing.
OpenAI and xAI rapidly capitalised on Anthropic’s situation
Anthropic’s refusal not only hurt the firm’s business proposition but also created an opening that competitors quickly filled. OpenAI reached an agreement with the Pentagon shortly after the designation, announcing that it would deploy its models on classified networks while implementing technical guardrails aligned with its safety principles. The deal allowed “any lawful government purpose” but included language barring certain uses, similar to the safeguards Anthropic had been fighting for.
Elon Musk’s xAI followed closely, signing its own deal around mid-March 2026. xAI, which has positioned Grok as a “maximum truth-seeking” alternative to what Musk describes as “woke AI”, embraced the Pentagon’s terms more readily. The integration of Grok into military systems, alongside the GenAI.mil platform serving millions of personnel, was hailed as supporting both administrative efficiency and critical mission needs.
Microsoft is no stranger to this
Microsoft has also played a pivotal behind-the-scenes role throughout. As OpenAI’s largest investor and exclusive cloud provider since 2019, Microsoft has enabled broader deployment of OpenAI technology through its Azure Government and high-security IL6 environments, which support classified operations. Reports indicate the Pentagon experimented with OpenAI models via Microsoft’s platforms even before OpenAI formally lifted earlier restrictions on military use. Microsoft also maintains massive standalone defense contracts, including long-running cloud and augmented-reality work, and has invested in Anthropic as well.
Last year, the controversy around Microsoft’s involvement in the Gaza war centred on allegations that its Azure cloud and AI services helped the Israeli military conduct mass surveillance and AI‑driven targeting in Gaza. Investigative reports suggested that the Israeli military used Microsoft Azure to store and process huge volumes of intercepted phone calls and location data from Palestinians in Gaza and the West Bank, often obtained through broad “mass surveillance.” Critics, including human‑rights and Palestinian‑rights groups, argue that this Azure‑powered infrastructure enabled AI‑based targeting systems that rapidly generate potential strike targets, raising concerns about civilian casualties and potential complicity in war crimes.
Microsoft has publicly stated that its terms prohibit using its technology for mass surveillance of civilians and that it found no clear evidence its Azure or AI tools were directly used to “target or harm” people in Gaza. However, Microsoft also acknowledges that it has limited visibility once data leaves its systems.
In September 2025, amid mounting pressure, Microsoft disabled certain Azure and AI services for a specific Israeli military unit (reportedly parts of Unit 8200) over concerns that those services were being used for mass surveillance. The company’s role has also sparked internal dissent and external campaigns accusing Microsoft of war‑crime complicity and profiteering from Gaza‑related military operations.