Ruhani Kaur | Bloomberg | Getty Images
“I said, ‘I just want to see the contracts,'” Michael told the All-In Podcast on Friday, reflecting on his early days managing the AI portfolio. “You know, the old lawyer in me.”
Michael’s request kicked off a months-long review process that culminated in the Defense Department banning Anthropic’s technology, leaving the military without its hand-picked AI models to operate in the most sensitive environments. In an extraordinary move, the DOD designated Anthropic a supply chain risk, a label that’s historically only been applied to foreign adversaries. It will require defense vendors and contractors to certify that they don’t use the company’s models in their work with the Pentagon.
Anthropic sued the Trump administration on Monday, calling the government’s actions “unprecedented and unlawful,” and claiming that they are “harming Anthropic irreparably,” putting hundreds of millions of dollars worth of contracts in jeopardy.
The DOD’s sudden reversal came as a shock to many officials in Washington who viewed Anthropic’s models as superior — they were the first to be deployed in the agency’s classified networks — and championed the company’s ability to integrate with existing defense contractors like Palantir. The decision was all the more puzzling since the Trump administration had threatened during negotiations to invoke the Defense Production Act, which could have forced Anthropic to grant the military access to its technology.
“I don’t know how those two things can both be true in reality,” said Mark Dalton, a retired Navy rear admiral who now leads technology and cybersecurity policy at R Street, a think tank in Washington, D.C. “Something is so necessary that you need to invoke DPA and so harmful that you put a designation on it that’s reserved for foreign adversaries.”
Defense experts like Dalton expressed concern about the government’s decision. Not only does it set a troubling precedent, they argue, but it also means the administration is banishing a key technology vendor that’s been lauded for its diligence on AI safety, its tough rhetoric against China and its entrepreneurial chops as one of the fastest-growing tech startups in the U.S.
“You’re not so excited if you’re in the military,” said Brad Carson, who worked in President Obama’s Defense Department until 2016 and, before that, was deployed to Iraq while in the Army and served two terms in Congress as a Democrat from Oklahoma. “They view Claude as being a better product, the most reliable, with the most user friendly outputs they can assimilate into planning.”
CNBC spoke to 17 AI policy experts, former Palantir and Anthropic employees, tech analysts and researchers about Anthropic’s critical role in the Defense Department and what comes next. Several of the people asked not to be named because they weren’t authorized to speak on the matter.
Anthropic CEO Dario Amodei founded the San Francisco-based company in 2021 alongside his sister, Daniela Amodei, and a handful of other researchers. The group had defected from OpenAI, before the launch of ChatGPT, over concerns about the company’s direction and attitude toward safety. They spent years carefully constructing Anthropic’s reputation as a firm more dedicated to responsible AI deployment than its rivals.
Anthropic launched its family of AI models, known as Claude, in March 2023, a few months after ChatGPT hit the market and quickly went viral. In the three years since introducing Claude, Anthropic has raised billions of dollars of capital, en route to a $380 billion valuation.
The company is now under immense pressure to justify that price tag and has been forced to rapidly commercialize its technology in an effort to keep pace with OpenAI and other rivals like Google.
While OpenAI was enthralling consumers, Anthropic found quick success selling to large enterprises, including the DOD. It’s an area Amodei started focusing on early, recognizing the business and societal importance of working closely with the government and military and helping to establish principles for safe uses of a technology that has the power to bring about potential catastrophes, according to people with knowledge of the matter.
The company began building relationships and making inroads with officials in Washington, D.C., and Amodei was among the few AI industry executives invited to meet with then-Vice President Kamala Harris in May 2023.
Anthropic’s push into the federal government
The Anthropic logo appears on a smartphone screen with multiple Claude AI logos in the background. Following the release of Claude Opus 4.6 on February 5, Anthropic continues to challenge its main competitors in the generative AI market in Creteil, France, on February 6, 2026.
Samuel Boivin | Nurphoto | Getty Images
Around that same time, Anthropic turned to a familiar tech partner that could help it prosper among D.C. technologists: Amazon Web Services.
Anthropic’s Claude became available within AWS’ Bedrock service that year, which helped it gain traction within the government tech community, multiple sources said. Federal agencies could begin experimenting with Anthropic’s models because they were accessible within AWS’ government-sanctioned environment.
Amazon has been one of Anthropic’s largest financial backers since 2023, investing a total of $8 billion in the startup.
As pilot projects got underway, many federal employees found that Claude produced more compelling results than other models from companies like OpenAI and Meta, the sources said. Claude could provide step-by-step reasoning for how it derived an answer or completed a task, which was crucial for federal agencies that require strong auditing and verification, sources said.
And since Anthropic had prioritized building for enterprise customers, the company’s user experience was especially suitable for desktop computers, the people said.
With a strong AI model and an intuitive user interface, Anthropic began to earn credibility with federal workers, sowing the seeds for another major partnership with Palantir.
Palantir is a software and services provider that counts on government contracts for about 60% of its U.S. revenue. Its government work has been the subject of intense scrutiny over the years, most recently because its technology has helped U.S. Immigration and Customs Enforcement track and deport immigrants.
In November 2024, Anthropic and Palantir announced a partnership with AWS that would allow U.S. intelligence and defense agencies to access Claude.
Many Anthropic staffers were upset about the deal when it was announced, a former employee said. It prompted “many big Slack threads” and became a point of lingering tension within the company.
“The partnership between Anthropic and the DOD, while perhaps surprising to people not closely following Silicon Valley over the past few years, is actually a continuation of a shift that started with the growth of Palantir, followed more recently by Anduril,” said David Evan Harris, a Chancellor’s Public Scholar at UC Berkeley and tech policy expert. “It’s not accidental that both of those companies derive their names from the Lord of the Rings books—these companies and their leaders built upon popular fantasy fiction notions of epic battles between good and evil, where taking sides might seem to be the only option.”
Having Palantir as a partner helped Anthropic build direct lines with the DOD and fast tracked its integration into the highest-level, classified projects. The two companies could then take on confidential work on top of Amazon’s tailored cloud computing service, which was intended to underpin highly sensitive tasks, the sources said.
These partnerships were a crucial reason that Anthropic was the first to officially deploy its models across classified networks, said Lauren Kahn, a senior research analyst at Georgetown’s Center for Security and Emerging Technology.
“The fact that Anthropic is able to basically play nice with others like Palantir, AWS, Google, etc, specifically Palantir,” she said, “is extremely valuable.”
In its July 2025 release announcing its $200 million defense contract, Anthropic said it had “accelerated mission impact across U.S. defense workflows with partners like Palantir.” Anthropic said its technology helped the government’s defense and intelligence organizations “rapidly process and analyze vast amounts of complex data.”
A dispute about ‘politics and personalities’
U.S. President Donald Trump arrives in Miami, Florida, U.S., March 7, 2026.
Kevin Lamarque | Reuters
But by that point, Anthropic’s relationship with the government was beginning to sour.
President Trump had been sworn into office that January, and Amodei’s dislike for the commander in chief, whom he once likened to a “feudal warlord” in a since-deleted Facebook post, according to Fortune, was something of an open secret.
Other industry executives, including OpenAI CEO Sam Altman, Apple CEO Tim Cook and Google CEO Sundar Pichai, had been photographed rubbing elbows with Trump at the White House, but Amodei was conspicuously absent. He didn’t attend Trump’s inauguration last year.
Amodei told staffers earlier this month that the administration doesn’t like Anthropic because it hasn’t donated or offered “dictator-style praise to Trump,” according to a report from The Information.
He apologized for the tone of those remarks in a statement on Thursday, writing that they were written after a “difficult day for the company” and do not “reflect my careful or considered views.”
Amodei has also drawn the ire of David Sacks, the venture capitalist serving as the White House AI and crypto czar.
Sacks accused Anthropic of supporting “woke AI” in October because of its stance on regulation, and of “running a sophisticated regulatory capture strategy based on fear-mongering,” after a company executive wrote an essay titled “Technological Optimism and Appropriate Fear.”
By last month, Anthropic seemed to recognize that it needed to smooth things over. The company appointed Chris Liddell, who served as deputy White House chief of staff during Trump’s first term, to its board of directors.
But in the following weeks, tensions between Anthropic and the DOD would only escalate further.
“This feels to me like a dispute that is about politics and personalities,” Michael Horowitz, a senior fellow for technology and innovation at the Council on Foreign Relations, said in an interview. “It’s masquerading as a policy dispute.”
Federal agencies say they’re ditching Claude
Dario Amodei, co-founder and CEO of Anthropic, during a Bloomberg Television interview in San Francisco, Dec. 9, 2025.
David Paul Morris | Bloomberg | Getty Images
By the time the Trump administration blacklisted Anthropic, the startup’s tools had been widely adopted across government agencies. A transition is already underway, as groups including the U.S. Department of Health and Human Services, the Treasury Department and the State Department have confirmed they are moving off of Claude.
But that process is especially complicated within the DOD, in part because the U.S. is actively carrying out a military operation in Iran. Anthropic’s models have been used to support that operation, even after it was blacklisted, as CNBC previously reported.
Transitioning away from Anthropic toward a new vendor will take the DOD time and comes at a significant cost in terms of efficiency, said Jacquelyn Schneider, a Hargrove Hoover fellow at Stanford University’s Hoover Institution.
“You’re not going to walk away from technologies that are deeply embedded in your wartime processes right before you go to war,” she said in an interview.
Anthropic received official communication that it had been labeled a supply chain risk on Thursday, and Amodei said in a statement that the company had “no choice” but to challenge the designation in court.
It’s not immediately clear what the impact will be on Anthropic’s business while the case proceeds. As of Friday, tech companies including Amazon, Microsoft and Google all said they will continue supporting Claude except for DOD-related work.
Anthropic’s clash with the DOD has caused a real headache for the startup, but it has also lifted the company’s overall profile. The company’s Claude app climbed to the top of Apple’s App Store for the first time late last month, pushing OpenAI’s ChatGPT, widely considered the most popular consumer AI tool, to the No. 2 slot. As of Monday, ChatGPT has reclaimed the top spot.
“Certainly with the events in the last two weeks, if the middle of America didn’t know Anthropic before, they do now,” George Sellner, a senior director and analyst at Gartner, said in an interview.
But those commercial interests can sometimes be at odds with Anthropic’s safety principles. It was only a matter of time before a high-stakes clash, like the company’s dispute with the DOD, spilled into the public, experts said.
By mid-February, Anthropic’s negotiations with the DOD had stalled. The startup wanted assurances that its technology wouldn’t be used for fully autonomous weapons or domestic mass surveillance of Americans, while the DOD wanted Anthropic to grant the agency unfettered access to Claude across all lawful purposes.
The organizations failed to reach a compromise, prompting President Donald Trump to direct all federal agencies to “immediately cease” using technology from the “leftwing nut jobs” at Anthropic. Hours later, Anthropic’s rival OpenAI announced it had reached its own agreement to deploy its models within the DOD’s classified network.
WATCH: Why the U.S. Defense Department blacklist of Anthropic is so unprecedented