
CIA to embed AI «co-workers» in every analytic platform within two years, deputy says

By Adam Bream, Tech Content Writer
Cover © Anonhaven

The Central Intelligence Agency will embed generative AI «co-workers» inside every analytic platform the agency uses within the next two years. Deputy Director Michael Ellis announced the plan on April 9, 2026, at a Special Competitive Studies Project event in Washington. As Defense One reported from the event, the agency ran more than 300 AI projects in 2025. The CIA also recently used AI to generate an intelligence report for the first time in its history.

Within a decade, Ellis said, CIA officers will manage teams of AI agents under what he called an «autonomous mission partner» model. Humans will remain in the decision loop for analytic judgments. The AI tools will draft, edit, triage, and flag, but will not decide.

Ellis used the same speech to draw a line on vendor dependency. He said the CIA «cannot allow the whims of a single company» to constrain its use of AI. He did not name Anthropic, but reporters who covered the event understood the remark as a reference to the company, which is in an open dispute with the U.S. Department of Defense over use restrictions on its Claude model.

What Ellis announced

The CIA will integrate AI systems described as «co-workers» into the analytic platforms used by intelligence officers. The systems will run on what Ellis called «a kind of classified version of generative AI». They will assist analysts with the foundational tradecraft of intelligence work.

The tasks Ellis listed are basic analyst work. They include drafting key judgments, editing for clarity, comparing drafts against tradecraft standards, testing analytical conclusions, and identifying trends in foreign intelligence. Ellis also mentioned processing large datasets and language translation. None of these are counterintelligence or mole-hunting tasks. They are analytic tradecraft applied to already-collected intelligence.

It won't do the thinking for our analysts, but it will help draft key judgments, edit for clarity and compare drafts against tradecraft standards.

— Michael Ellis, Deputy Director, Central Intelligence Agency

Ellis disclosed two specific operational milestones during the same speech. The CIA tested more than 300 AI projects during 2025. The agency recently used AI to generate an intelligence report for the first time in its history. Ellis did not specify the date, the topic, the AI system, or whether the report was disseminated to executive branch consumers.

The longer-horizon vision is the «autonomous mission partner» model. Within a decade, Ellis said, CIA officers will manage teams of AI agents in a hybrid configuration. The intent is to increase the speed and scale of intelligence work performed by officers in the field and analysts at headquarters. This is the most elevated framing the U.S. intelligence community has used in public for AI tooling.

The Anthropic subtext

Ellis did not mention any AI vendor by name during the speech, but reporters who covered the event understood one section of his remarks as a reference to Anthropic. The framing appears in Defense One, Government Executive, and Nextgov coverage by David DiMolfetta, and in Cointelegraph's Politico-sourced report. Each notes the same point: Ellis did not single out Anthropic, but cautioned that the CIA cannot allow the whims of a single company to constrain its use of AI, and said the agency is looking to diversify across multiple vendors.

Cannot allow the whims of a single company.

— Michael Ellis, Deputy Director, Central Intelligence Agency

The reference is intelligible only with the dispute as background. Anthropic and the U.S. Department of Defense have been in open conflict since February 2026 over the terms under which Claude can be used in military and federal workflows. The conflict is now in federal court.

The dispute centers on two specific use cases. Anthropic refused to permit Claude to be used for fully autonomous weapons systems and for mass domestic surveillance. The Pentagon position is that contracts with AI vendors should provide flexibility to use the tools for «all lawful purposes». The two positions did not converge.

On February 24, 2026, Defense Secretary Pete Hegseth gave Anthropic CEO Dario Amodei an ultimatum: lift the restrictions by February 27. Anthropic declined in a statement released February 26. On February 27, President Donald Trump directed federal agencies to cease using Anthropic's products, and Hegseth designated the company a «supply chain risk», a label traditionally reserved for foreign adversaries. CNBC reported that Anthropic is the only American company ever publicly named a supply chain risk by the Department of Defense.

Anthropic filed suit in a California federal court on March 9, 2026, contesting the designation. On March 26, Judge Rita Lin issued a preliminary injunction blocking the government from enforcing its ban. According to the Tech Policy Press timeline, Judge Lin's 43-page ruling found that the government took retaliatory actions against Anthropic that likely violated the law. The case has attracted amicus briefs from tech sector workers, the American Civil Liberties Union, and nearly forty employees of Google and OpenAI, including Google's chief scientist.

The Pentagon contract at stake is valued at up to $200 million, a small fraction of Anthropic's reported $30 billion annualized revenue run rate. Bloomberg reported the figure on April 7, citing the company in connection with a Broadcom and Google compute deal. Claude is currently the only AI model available in the military's classified systems. Pentagon officials have publicly praised Claude's capabilities. According to Axios reporting in February, Claude was used during a U.S. military operation in January 2026.

Anthropic is in trouble because I fired them like dogs, because they shouldn't have done that.

— President Donald Trump, in an interview with Politico

One day before Ellis spoke, Anthropic announced Project Glasswing, a consortium intended to help secure critical software against AI-driven attacks. Project Glasswing runs on a non-public Anthropic frontier model that the company says has uncovered thousands of vulnerabilities in production code. Ellis did not mention Project Glasswing in his speech. The intelligence community and its industry suppliers are reportedly already examining how a model of that capability could affect cyber missions.

The China frame

Ellis explicitly framed the AI adoption push as a response to technological competition with China. He said the technological gap between the United States and China has narrowed sharply over the past decade.

Five to ten years ago, China was nowhere near America, in terms of technological innovation. That's just not true today.

— Michael Ellis, Deputy Director, Central Intelligence Agency

The CIA has doubled its technology-related foreign intelligence reporting to track how foreign adversaries are using advanced AI. Those products focus on technology use abroad and cover semiconductors, cloud computing, infrastructure, cybersecurity, and research and development. The doubling is a measurable resource shift toward technology and economic intelligence targets.

The cyber mission center

Ellis disclosed in the April 9 speech that the CIA recently elevated its Center for Cyber Intelligence to mission center status. According to The Record, CIA Director John Ratcliffe quietly made the change last October as part of an internal reorganization. The Center for Cyber Intelligence had resided within the CIA's Directorate of Digital Innovation since 2015. Ellis said the elevation is «paying dividends already by allowing us to deploy new tools to the field and gain more access to priority targets». Ellis did not name the tools or the targets.

The framing in Ellis's speech is that the contest of cybersecurity will be a contest of AI capability. He said whoever capitalizes on the best AI models will wield enormous power. The new mission center, in his framing, is the organizational expression of that view inside the CIA.

Ellis and the NSA episode

Michael Ellis has served as Deputy Director of the CIA since February 2025, making him the youngest person and the first millennial in the role. He graduated from Yale Law School in 2011 and is a Federalist Society alumnus. His path to the CIA includes a contested 2020 episode at the National Security Agency. Acting Defense Secretary Christopher Miller ordered NSA Director Paul Nakasone to install Ellis as NSA general counsel four days before Trump left office. Nakasone placed Ellis on administrative leave on the first day of the Biden administration pending an inspector general inquiry, and Ellis resigned in April 2021. The IG later found no improper influence in his selection. The episode is part of why parts of the intelligence community view Ellis as a political appointee rather than a career official.

The April 9 event was hosted by the Special Competitive Studies Project. SCSP is a non-profit founded by former Google CEO Eric Schmidt in October 2021 to position the United States to win technological competition by 2030.

What is not in the public record

The CIA has not disclosed which AI vendor or vendors will provide the «co-worker» systems for the analytic platform integration. Ellis's «whims of a single company» remark suggests a multi-vendor strategy, but no specific vendors have been named.

Technical architecture is also unstated. Whether the model weights are hosted on agency infrastructure and whether the model is fine-tuned on classified data are both unknown. So is whether inference runs inside agency boundaries or through a cleared cloud environment.

Governance mechanisms for AI-generated analytic products are likewise unclear. Ellis said the AI would help compare drafts against tradecraft standards. He did not say what mechanisms will ensure the AI itself adheres to those standards. Anthropic has not, as of publication, issued a public response to the «whims of a single company» remark.

What this signals

The Ellis speech is the clearest public articulation to date of CIA generative AI integration plans. No major U.S. intelligence agency has previously committed to such a specific timeline for core mission workflows. For commercial AI vendors with national security sales pipelines, the implication is direct. Contractual usage restrictions are now a procurement consideration that buyers will weigh against the underlying capability.

The Anthropic dispute is the immediate context. The principle Ellis articulated applies to all vendors. According to the Atlantic Council, the OpenAI deal that followed Anthropic's refusal triggered a 295% spike in ChatGPT app uninstalls and a #QuitGPT campaign on social media. A 2025 Gallup and SCSP poll found 60% of Americans distrust AI somewhat or fully. Public opinion and procurement pressure are pulling in different directions.

For defenders at organizations outside the U.S. government, the practical takeaway is narrower. Adversary tooling is increasingly likely to incorporate AI capabilities. The analytic and detection tooling defenders use will need to incorporate AI capabilities to remain competitive. Ellis's «contest of cybersecurity will be a contest of AI capability» framing is not new. It is consistent with a year of statements from intelligence community leaders, defense officials, and major commercial AI providers.

What remains unanswered is the specific architecture of the deployment and the vendor or vendors selected for the analytic platform integration. Also unanswered are the governance mechanisms for AI-generated analytic products and the operational details of the first AI-generated intelligence report.

These gaps will close as the «co-worker» capability rolls out across the analytic workforce over the next two years.


Questions on the topic

What did the CIA announce about AI co-workers?
CIA Deputy Director Michael Ellis announced on April 9, 2026, that the agency will embed generative AI co-workers across all analytic platforms within two years. The CIA ran more than 300 AI projects in 2025 and recently used AI to generate an intelligence report for the first time in its history.