The Pentagon bought into AI, and now it wants the keys. According to CBS News, Defense Secretary Pete Hegseth gave Anthropic’s Dario Amodei a hard deadline to sign over full access to Claude, with talk of the Defense Production Act and a possible government blacklist swirling behind the scenes.

What You Should Know

CBS News reported that the Pentagon demanded a signed document granting full access to Anthropic’s Claude model by 5 p.m. on February 27th, 2026. Officials also discussed using the Defense Production Act or labeling Anthropic a supply chain risk.

The dispute centers on who controls what after a contract gets signed. The Defense Department wants broad, classified-use access to Claude for military operations, while Anthropic has pushed for guardrails that limit certain uses, including surveillance and fully autonomous targeting.

Inside the Deadline and the Leverage

CBS News reported the pressure campaign came to a head in a February 24th, 2026, meeting at the Pentagon, where Hegseth told Amodei to deliver a signed document granting full access by the end of the week. The deadline was set for 5 p.m. on February 27th, 2026, per CBS.

After Amodei left, officials discussed whether to invoke the Defense Production Act, the Cold War-era law that lets the government steer private industry in the name of national defense. According to CBS News, another option floated inside the building was to designate Anthropic a supply chain risk, a label that could effectively push it out of sensitive government work.

The Pentagon’s argument, as described by CBS News, is blunt: Once the government pays, the vendor cannot dictate how the product is used. Hegseth compared it to Boeing and military aircraft purchases, and a senior official pointed to other AI players, including Grok from Elon Musk’s xAI, as being willing to operate in classified settings.

Why Anthropic Is Pushing Guardrails

Anthropic’s position is about limits, liability, and the nightmare scenario in which a model’s mistake becomes lethal. Sources told CBS News the company has pressed the Defense Department to agree that Claude will not be used for mass surveillance of Americans, and that it will not be used for final targeting decisions without human involvement.

That sets up the core contradiction. Pentagon officials argue that mass surveillance is illegal and say they are seeking only lawful use, while Anthropic is effectively asking for written assurances that the model will not be used for precisely the kinds of missions that provoke public blowback and congressional scrutiny.

One senior Pentagon official put it this way in a statement to CBS News: “This has nothing to do with mass surveillance and autonomous weapons being used. The Pentagon has only given out lawful orders.” Anthropic, for its part, said it was continuing “good-faith conversations” about usage policies tied to what its models can do responsibly.

Now the stakes are bigger than one contract. CBS News reported Anthropic was the first tech company authorized to work on the military’s classified networks, which makes any supply chain risk designation more than bureaucratic shade. It would be an escalation that tells other contractors how the Pentagon plans to handle AI vendors that try to negotiate boundaries.

Watch the paperwork, not the rhetoric. If a signed full-access document appears by the deadline, the Pentagon gets the control it wants, and Anthropic gets to stay in the room. If not, the government has already sketched out its next moves.
