Top AI Firms Grant US Government Early Access to Models
Several leading artificial intelligence firms have agreed to give the U.S. government early access to their cutting-edge AI models. The decision follows closely on the heels of reports that the Trump administration intends to increase government oversight of emerging AI technologies.

According to a new report from the Wall Street Journal, tech giants Google, Microsoft, and xAI have formalized an agreement with the Trump administration. This pact grants the government early access to their new frontier AI models before their public release. This move comes just one day after a New York Times report detailed the administration’s considerations for AI model oversight.

Key Players and the Terms of Access

The early access will be facilitated through the Commerce Department’s Center for AI Standards and Innovation (CAISI). This center is tasked with evaluating the capabilities and security protocols of these new AI models. Notably, OpenAI and Anthropic had previously entered into similar agreements with the Commerce Department in 2024. CAISI has already conducted over 40 evaluations on AI models prior to their public launch, underscoring its active role in this evolving landscape.

Shifting Landscape for AI Oversight

These recent collaborations align with the Trump administration’s broader push for cybersecurity and AI standards. The Wall Street Journal also reported earlier this week that the administration is exploring a “cybersecurity-focused executive order.” This order would establish an oversight group specifically dedicated to creating standards for AI models.

The administration’s stance has not been without friction. Earlier this year, a feud with AI company Anthropic saw the U.S. government declare Anthropic’s AI chatbot, Claude, a supply chain risk to national security. The dispute arose after Anthropic requested that its technology not be used for warfare or mass surveillance. Despite such incidents, the Trump administration has generally maintained a pro-AI stance, emphasizing the need for U.S. companies to retain a competitive edge over rivals, particularly from China.

Expert Perspective on Frontier AI Evaluation

Chris Fall, director of CAISI, highlighted the critical importance of this initiative. Fall stated to the WSJ, “Independent, rigorous measurement science is essential to understanding frontier AI and its national security implications.” He added that these “expanded industry collaborations help us scale our work in the public interest at a critical moment.”

What This Means for Future AI Development

The agreements with Google, Microsoft, and xAI signal a growing convergence between government oversight and private sector AI development. This trend suggests a future where national security concerns will increasingly shape the deployment and capabilities of advanced AI models. The ongoing focus on establishing standards and evaluations points to a more regulated, albeit collaborative, environment for AI innovation.
