Judge blocks Pentagon’s punitive measures against ‘supply chain risk’ Anthropic


A Biden-appointed judge granted Anthropic a preliminary injunction blocking the Pentagon's punitive measures against the company, stemming from a feud over the military's use of its artificial intelligence model, Claude.

U.S. District Judge Rita Lin didn't mince words in a 43-page ruling, condemning the Pentagon's posture as "Orwellian" and finding that its actions against Anthropic were without basis. She gave the government one week to appeal, and indicated the AI firm was likely to succeed on the merits of its lawsuit.

“Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government,” she wrote on Thursday.

Lin rejected the Pentagon's argument that designating Anthropic a supply-chain risk was necessary on national security grounds, finding instead that the punitive measures were prompted primarily by the company's public criticism of the Pentagon.

“These broad measures do not appear to be directed at the government’s stated national security interests,” she wrote. “The Department of War’s records show that it designated Anthropic as a supply chain risk because of its ‘hostile manner through the press.’”

“Punishing Anthropic for bringing public scrutiny to the government’s contracting position is classic illegal First Amendment retaliation,” Lin added.

Her ruling was hardly surprising, as she had expressed heavy skepticism toward the Pentagon at a hearing on Tuesday.

Anthropic celebrated the ruling, calling it essential to protecting the company and its customers.

“We’re grateful to the court for moving swiftly, and pleased they agree Anthropic is likely to succeed on the merits,” an Anthropic spokesperson told the Washington Examiner. “While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI.”

The Pentagon has argued that Anthropic's unwillingness to give it unfettered access constitutes a national security threat, endangering servicemembers' lives if the software were to prove faulty in combat.

“The worry is that Anthropic, instead of merely raising concerns and pushing back, will say we have a problem with what DoW is doing and will manipulate the software … so it doesn’t operate in the way DoW expects and wants it to,” Trump administration attorney Eric Hamilton said at the Tuesday hearing.


The U.S. military continues to use Claude, primarily through Palantir's Maven Smart System. Despite the feud, the United States relied heavily on Anthropic for targeting Iran in Operation Epic Fury.

Claude allowed the U.S. to strike over 1,000 targets in the first 24 hours of operations, the Washington Post reported, identifying and prioritizing key command-and-control sites and military installations.
