Anthropic says it will not accede to Pentagon demands as deadline looms


WASHINGTON (AP) — A dispute between the Trump administration and artificial intelligence company Anthropic has reached an impasse, with military officials demanding the company alter its ethical policies by Friday or risk losing business and facing potential restrictions.

Pentagon’s Ultimatum

Anthropic CEO Dario Amodei stated Thursday his company “cannot in good conscience accede” to the Pentagon’s request for unrestricted use of its technology. While Anthropic can absorb the loss of a defense contract, the ultimatum poses broader risks to the company as it rapidly grows.

Military officials have warned that if Amodei doesn’t comply, they will not only cancel Anthropic’s contract but also designate the company a “supply chain risk,” a label typically reserved for foreign adversaries that could jeopardize partnerships with other businesses.

Ethical Concerns and AI Safeguards

Anthropic has sought assurances from the Pentagon that its chatbot, Claude, won’t be used for mass surveillance of Americans or in fully autonomous weapons. However, the company said recent contract language, while presented as a compromise, would allow those safeguards to be disregarded.

Sean Parnell, the Pentagon’s top spokesman, stated that the military “will not let ANY company dictate the terms regarding how we make operational decisions,” and gave Anthropic until 5:01 p.m. ET on Friday to respond.

Emil Michael, the defense undersecretary for research and engineering, criticized Amodei on X, alleging he “has a God-complex” and is willing to risk national safety to control the U.S. military.

Industry Support and Debate

Michael’s comments were largely met with opposition in Silicon Valley, where tech workers from OpenAI and Google voiced support for Amodei in an open letter. OpenAI, Google, and Elon Musk’s xAI also have contracts to supply AI models to the military.

Musk sided with the Trump administration, stating on X that “Anthropic hates Western Civilization” after Michael highlighted a previous version of Claude’s principles that encouraged consideration of non-Western perspectives. All leading AI models are programmed with instructions guiding their values and behavior.

The debate has divided opinion, with some observers, including Republican and Democratic lawmakers and a former leader of the Defense Department’s AI initiatives, expressing concern over the Pentagon’s approach.

Retired Air Force Gen. Jack Shanahan wrote that “painting a bullseye on Anthropic garners spicy headlines, but everyone loses in the end.”

Past Precedent and Current Use

Shanahan, who previously led Project Maven – a controversial AI project for drone footage analysis – noted that Claude is already widely used across the government, including in classified settings, and that Anthropic’s red lines are “reasonable.” He also stated that large language models powering chatbots like Claude are “not ready for prime time in national security settings,” particularly for autonomous weapons.

Parnell asserted the Pentagon wants to “use Anthropic’s model for all lawful purposes” and believes opening up use of the technology would prevent the company from “jeopardizing critical military operations.” Officials have not detailed specific use cases.

Defense Production Act and Potential Outcomes

Military officials warned they could designate Anthropic as a supply chain risk, cancel its contract, or invoke the Defense Production Act to gain more authority over the company’s products. Amodei said these threats are contradictory, labeling Anthropic both a security risk and essential to national security. He stated Anthropic will work to ensure a smooth transition to another provider if necessary.
