Client Alert

President Trump’s AI Action Plan: Key Insights

July 31, 2025
The plan seeks to limit AI regulation at the federal and state levels, encourages rapid development of AI infrastructure, and warns against ideological bias in models.

Key Points

  • The AI Action Plan outlines more than 90 policy recommendations for federal agencies focused on promoting innovation, building infrastructure, and protecting national security as it relates to the proliferation of AI technologies.
  • The policy recommendations cover a broad range of topics, including reducing AI regulation, promoting the distribution of open-source AI models and datasets, eliminating ideological bias in AI models, training workers to use AI, facilitating the rapid development of AI-related infrastructure, and increasing AI exports.
  • If implemented, the Action Plan’s policies are likely to impose substantive new obligations on AI developers and deployers, particularly those that contract with the federal government.
  • New executive orders mandate further rulemaking by federal agencies in the areas of AI procurement, infrastructure, and exports.

On July 23, 2025, the Trump administration released a 28‑page AI strategy document titled “Winning the Race: America’s AI Action Plan” (the Action Plan or Plan). The Action Plan was drafted pursuant to Executive Order 14179, which directed certain of the president’s advisers and other officials to develop an action plan intended to “sustain and enhance America’s global AI dominance in order to promote human flourishing, economic competitiveness, and national security.”

While the Action Plan is not a formally binding document and as such does not require federal agencies (or any private sector entities) to take specific actions, it offers a broad set of “Recommended Policy Actions” for federal agencies to consider across a sweeping range of topics. 

In parallel, President Trump signed three executive orders that advance the core principles outlined in the Action Plan by restricting federal government procurement of “biased” AI models, streamlining the permitting and approval processes for data centers and other AI infrastructure, and promoting a global export strategy for American AI systems. Notably, these orders do impose binding obligations on federal agencies. 

Together, these measures represent the clearest and most comprehensive guidance that the Trump administration has issued to date with respect to AI. They mark another stark and deliberate pivot from the Biden-era emphasis on risk management toward deregulation, rapid development, and solidifying the US’s global influence in AI.

This Client Alert analyzes the Action Plan and concurrent executive orders and highlights the implications for private sector entities.

Overview

The Action Plan outlines more than 90 Recommended Policy Actions, which are divided into three pillars:

I. Accelerating AI Innovation
II. Building American AI Infrastructure
III. Leading International AI Diplomacy and Security

Pillar I: Accelerating AI Innovation

The first pillar of the Action Plan details 15 principles that focus on reducing regulatory barriers to AI innovation and establishing ground rules for AI procurement by the federal government. Below we discuss the principles that are most likely to affect private sector companies that develop or implement AI.

Limiting AI Regulation at the Federal and State Level

Federal

The Action Plan confirms the Trump administration’s intent to avoid implementing onerous AI-focused regulation or legislation at the federal level, particularly where such regulation or legislation would restrict AI development. This is consistent with the administration’s approach to AI to date, which has included rescinding President Biden’s executive order on the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” (for more on that order, see this Latham Client Alert) and issuing a new executive order in January, titled “Removing Barriers to American Leadership in Artificial Intelligence,” focused on strengthening the US as a global AI power.

The Plan also suggests a reduced focus on federal regulatory enforcement. Specifically, the Plan recommends that the Federal Trade Commission (FTC) review all of its investigations, orders, consent decrees, and injunctions initiated or entered into during the Biden administration to ensure they do not unduly burden AI innovation. Similarly, all federal agencies are instructed to identify policies that may hinder AI development and either revise or repeal them. 

Notably, the FTC announced an enforcement initiative in September 2024 called “Operation AI Comply,” which sought to crack down on companies using AI to deceive consumers or engaging in “AI washing” by overstating or falsely claiming that their products use AI.

State

The Action Plan also targets state-level AI regulation. While acknowledging that the federal government should not “interfere with states’ rights to pass prudent laws that are not unduly restrictive,” the Action Plan recommends that federal agencies that have AI-related discretionary funding consider a state’s “regulatory climate” when making AI-related funding decisions and not provide such funding to states with burdensome AI regulations. 

This approach is reminiscent of the proposed 10-year state-law moratorium that Republicans introduced in a draft of the House reconciliation bill earlier this year. Initially, the provision proposed an outright ban on enforcement of AI-focused state laws for 10 years; it was later revised to instead restrict certain federal funding to states that sought to enforce such laws. 

Although the moratorium provision was ultimately removed from the final version of the bill, the Action Plan renews the Trump administration’s efforts to limit the scope of AI legislation at the state level through federal funding decisions. President Trump further emphasized this point to the media in announcing the Action Plan, telling reporters that the US must “have a single federal standard, not 50 different states regulating this [AI] industry.”

Removing “Ideological Bias” From AI in the Federal Government

Another core principle of the Action Plan is to ensure that AI tools and guidance deployed by the federal government are objective and free from ideological bias. Along these lines, the Plan calls for the Department of Commerce to amend the National Institute of Standards and Technology (NIST) AI Risk Management Framework to eliminate references to misinformation, DEI, and climate change.

The Plan instructs federal agencies to procure only frontier large language models (LLMs) that are “objective and free from top-down ideological bias.” While the Plan does not define this bias in more detail, President Trump signed a concurrent executive order, titled “Preventing Woke AI in the Federal Government” (the Bias Order), that sheds more light on this initiative.

The Bias Order directs federal agencies to ensure that the LLMs they procure comply with two overarching principles: “Truth-Seeking” and “Ideological Neutrality” (together, the Unbiased AI Principles). 

The Bias Order defines Truth-Seeking as providing truthful outputs in response to user prompts that seek facts, including by “prioritiz[ing] historical accuracy, scientific inquiry, and objectivity, and acknowledg[ing] uncertainty where reliable information is incomplete or contradictory.” 

The Bias Order describes Ideological Neutrality as encompassing AI tools that “do not manipulate responses in favor of ideological dogmas such as DEI.” The Bias Order directs developers not to “intentionally encode partisan or ideological judgments into an LLM’s outputs unless those judgments are prompted by or otherwise readily accessible to the end user.”

Under the Bias Order, the Director of the Office of Management and Budget (OMB) must issue further guidance within 120 days regarding the implementation of the Unbiased AI Principles. The Bias Order lays out certain criteria for OMB’s guidelines, including that they must permit vendors to comply with the Ideological Neutrality principle by disclosing an LLM’s “system prompt, specifications, evaluations, or other relevant documentation,” rather than being required to disclose model weights or other sensitive technical data.

Lastly, the Bias Order directs federal agencies to include provisions in future LLM procurement contracts (and, where possible, to revise existing contracts) that require vendors to guarantee that the LLM complies with the Unbiased AI Principles and to pay the costs associated with decommissioning the LLM if the federal agency terminates the contract due to the vendor’s noncompliance.

Fostering AI Innovation and Encouraging Broad AI Adoption

The Action Plan describes a number of principles that broadly focus on removing barriers to AI innovation and setting the stage for broad AI adoption within both the federal government and the private sector.

For example, the Plan lauds the benefits of open-source and open-weight models and calls on the federal government to encourage development of such models by, among other things, accelerating the maturation of a “healthy financial market for compute” in order to provide better access to large-scale computing power for academics and startups; increasing the research community’s access to private sector computing, models, and data; and driving adoption of open-source and open-weight models by small- and medium-sized businesses. 

The Plan also calls for publishing a new National AI Research & Development Strategic Plan to help guide federal AI investments, and encourages the federal government to invest in theoretical, computational, and experimental AI research in order to hasten the development of new and transformational AI technologies. Notably, the Plan does not address whether a provider of an open-source model can be held liable for downstream uses of that model, an issue that drew intense scrutiny in connection with California’s Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, which Governor Gavin Newsom vetoed on September 29, 2024.

In order to encourage broad AI adoption, the Plan recommends establishing regulatory sandboxes through agencies like the Food and Drug Administration and the Securities and Exchange Commission to allow companies to rapidly test and deploy AI tools under reduced regulatory scrutiny. The Plan also proposes the creation of an AI procurement toolbox for federal agencies to ease the adoption of AI tools and encourage uniformity in AI adoption across the federal government.

Pillar II: Building American AI Infrastructure

The Plan’s second pillar focuses on increasing the development of AI infrastructure within the US to keep pace with the rapid growth and adoption of AI tools. Below we discuss the principles that are most likely to impact private sector companies that develop or implement AI.

Streamlining Permitting Processes for Data Centers, Semiconductor Manufacturing, and Other AI Infrastructure

The Plan aims to simplify the processes through which data centers, semiconductor manufacturing facilities, and other AI infrastructure projects are permitted and approved. The Plan outlines a number of proposed policy actions that seek to accomplish this goal, all of which are expanded on in a concurrent executive order titled “Accelerating Federal Permitting of Data Center Infrastructure” (the Infrastructure Order). The Infrastructure Order aims to “facilitate the rapid and efficient buildout” of data centers and associated infrastructure by “easing Federal regulatory burdens” and “utilizing federally owned land and resources.” Among other things, the Infrastructure Order:

  • Directs the Secretary of Commerce to provide financial support for “Qualifying Projects,” including loans and loan guarantees, grants, tax incentives, and offtake agreements. Federal agencies are directed to identify existing financial support that can be used to assist Qualifying Projects. The Infrastructure Order clarifies that this financial assistance will not be considered a “major Federal action” under the National Environmental Policy Act (NEPA).

    “Qualifying Project” is defined as:
    i. a Data Center Project or Covered Component Project for which the Project Sponsor has committed at least $500 million in capital expenditures as determined by the Secretary of Commerce;
    ii. a Data Center Project or Covered Component Project involving an incremental electric load addition of greater than 100 MW;
    iii. a Data Center Project or Covered Component Project that protects national security; or
    iv. a Data Center Project or Covered Component Project that has otherwise been designated by the Secretary of Defense, the Secretary of the Interior, the Secretary of Commerce, or the Secretary of Energy as a “Qualifying Project.”

    “Data Center Project” is defined as “a facility that requires greater than 100 megawatts (MW) of new load dedicated to AI inference, training, simulation, or synthetic data generation.” 

    “Covered Component Project” is defined as “infrastructure comprising Covered Components, or a facility with the primary purposes of manufacturing or otherwise producing Covered Components.”

    “Covered Components” is defined as “materials, products, and infrastructure that are required to build Data Center Projects or otherwise upon which Data Center Projects depend,” including:
    i. energy infrastructure, such as transmission lines, natural gas pipelines or laterals, substations, switchyards, transformers, switchgear, and system protective facilities;
    ii. natural gas turbines, coal power equipment, nuclear power equipment, geothermal power equipment, and any other dispatchable baseload energy sources, including electrical infrastructure (including backup power supply) constructed or otherwise used principally to serve a Data Center Project;
    iii. semiconductors and semiconductor materials, such as wafers, dies, and packaged integrated circuits;
    iv. networking equipment, such as switches and routers; and
    v. data storage, such as hardware storage systems, software for data management and protection, and integrated services that work with public cloud providers. 
  • Seeks to streamline approval processes for data centers, including by directing federal agencies to coordinate with the White House Council on Environmental Quality to identify categorical exclusions to NEPA that could facilitate the construction of Qualifying Projects.
  • Directs the Environmental Protection Agency (EPA) to develop or modify regulations promulgated under various environmental laws, such as the Clean Air Act, the Clean Water Act, and the Comprehensive Environmental Response, Compensation, and Liability Act, in order to expedite permitting applications.
  • Directs the departments of the Interior, Defense, and Energy to identify federal lands that may be suitable for Qualifying Projects and provide necessary authorizations for use.
  • Directs the EPA to promptly identify brownfield and Superfund sites for use by Qualifying Projects and to develop guidance to expedite the environmental review process.
  • Allows the Executive Director of the Federal Permitting Improvement Steering Council to designate Qualifying Projects as “transparency projects” under the FAST-41 program, which is designed to improve federal agency coordination and timeliness of environmental reviews for infrastructure projects.
  • Revokes the executive order issued by President Biden on January 14, 2025, titled “Advancing United States Leadership in Artificial Intelligence Infrastructure,” which set out guiding principles and imposed certain obligations regarding the development of AI infrastructure in the US.

For more information on the Infrastructure Order and its potential implications, see this Latham blog post.

Developing the Power Grid to Meet AI Demands

Both the Action Plan and the Infrastructure Order call for expanding the capacity of the US power grid to keep pace with increasing AI needs. This includes stabilizing the existing power grid by, among other things, preventing the premature decommissioning of power generation resources and optimizing existing grid resources to increase efficiency and performance. At the same time, the Plan and the Infrastructure Order propose bringing new power sources online as quickly as possible, including natural gas, coal, geothermal, and nuclear power sources, and reforming markets to align financial incentives for investing in new power sources with the US’s growing power needs.

Improving Cybersecurity and AI Incident Response Capability

Finally, the second pillar introduces a number of principles that collectively aim to bolster security in AI systems and improve the federal government’s ability to respond to critical AI incidents.

For instance, the Action Plan proposes establishing an AI Information Sharing and Analysis Center led by the Department of Homeland Security (DHS), which would promote sharing AI-related security threats and intelligence across critical infrastructure sectors. The Plan also calls on DHS to issue and maintain guidance for private sector entities on remediating and responding to AI-specific threats, and for all federal agencies to share known vulnerabilities and threats with relevant stakeholders in the private sector.

The Plan further seeks to encourage the development of “secure by design” technologies that are not susceptible to adversarial attacks or malicious inputs, including by refining the Department of Defense’s Responsible AI and Generative AI Frameworks, Roadmaps, and Toolkits. Along the same lines, the Plan encourages the federal government to prepare for potential AI-related incidents by developing and implementing best practices and response frameworks for both the public and private sectors.

Pillar III: Leading International AI Diplomacy and Security

The final pillar of the Action Plan seeks to mobilize the Commerce and State Departments to export full-stack US AI technologies to allied nations and tighten export controls to restrict access and influence by US adversaries. Below we discuss the principles that are most likely to impact private sector companies that develop or implement AI.

Exporting American AI to Allies and Partners

The Action Plan seeks to prevent other nations from relying on non-US AI technologies by operationalizing a program to gather proposals and facilitate deals with US allies and partners that meet US-approved security requirements and standards.

In connection with this principle, President Trump issued a concurrent executive order titled “Promoting the Export of the American AI Technology Stack” (the Export Order). The Export Order requires the Secretary of Commerce, within 90 days of the order, to establish an American AI Exports Program that will be open to proposals from “industry-led consortia.” 

In order to be considered for inclusion in the Program, each proposal must:

  • Include a full-stack AI technology package, which encompasses:
    • AI-optimized computer hardware, data center storage, cloud services, and networking;
    • data pipelines and labeling systems;
    • AI models and systems;
    • measures to ensure security and cybersecurity of AI systems; and
    • AI applications for specific use cases.
  • Identify specific target countries or regions for export engagement.
  • Describe a business and operational model to explain who will build and maintain associated infrastructure.
  • Detail requested federal incentives and support mechanisms.
  • Comply with US export controls.

After the Secretary issues a public call for proposals, parties will have 90 days to submit their proposals. Proposals that are selected will be designated as “priority AI export packages” under the Program.

The Export Order then calls for the Economic Diplomacy Action Group (EDAG) to “coordinate mobilization of federal financing tools in support of priority AI export packages.” Specifically, EDAG is charged with, among other things, developing and executing a unified federal government strategy to promote the export of American AI technologies; aligning technical, financial, and diplomatic resources to accelerate deployment of priority AI export packages; analyzing market access, including technical barriers to trade and regulatory measures that may impede the competitiveness of US offerings; and facilitating investment in US small businesses for the development of AI technologies and the manufacture of AI infrastructure, hardware, and systems.

Strengthening Export Controls

The Action Plan aims to strengthen the US’s export controls by increasing export control enforcement over AI compute and plugging loopholes in existing semiconductor manufacturing export controls.

Specifically, the Action Plan calls for the federal government to track the movement of advanced chips to ensure they are not being diverted to adversarial countries. It also proposes increasing global chip export control enforcement, including by monitoring emerging technology developments in AI compute to ensure coverage over areas where chips may be diverted.

Likewise, the Plan directs the Department of Commerce to develop export controls over component sub-systems necessary for semiconductor manufacturing, as the US’s current approach is to implement such export controls only over major systems (but not component sub-systems).

Solidifying the US’s Global AI Influence

The Action Plan calls for the US to advocate for international governance standards that promote innovation and “counter authoritarian influence,” particularly from China. In connection with this goal, the Plan proposes that the federal government partner with frontier AI developers in order to evaluate potential national security risks arising from frontier AI systems, including by evaluating vulnerabilities and “malign foreign influence” arising from the use of adversaries’ AI systems in critical infrastructure.

Implications for Private Sector Entities

The content of the Action Plan is not necessarily surprising, given that the Trump administration has consistently expressed its goal to reduce regulation of the AI industry in the US in order to promote AI development and innovation. However, the Plan and its corresponding executive orders are the most concrete steps the administration has taken to date to implement and operationalize these goals.

The Action Plan serves as confirmation that the Trump administration will not only avoid implementing much (if any) substantive AI law at the federal level, but also explore methods to pressure states (like California and New York) that are at the forefront of AI regulation in the US. For now, states are still free to impose AI laws as they see fit — but the policy recommendations in the Plan mark a clear and concerted effort to discourage state-level AI regulation and may well be a harbinger of future efforts to scale back state AI law.

Further, while the Action Plan does not impose any direct requirements on private sector entities, the Plan details dozens of policy actions for federal agencies that, if implemented, will undoubtedly impact AI developers and deployers, particularly if they contract with the federal government.

For instance, the Action Plan’s edict to federal agencies to avoid procuring LLMs that pose a risk of “ideological bias” could create an array of new obligations and risks around explainability and bias reduction for developers that provide AI systems to federal agencies. While the accompanying Bias Order suggests that such bias would occur where a developer “intentionally encode[s] partisan or ideological judgments into an LLM’s outputs,” neither the Bias Order nor the Action Plan clearly defines the full scope of outputs that could conceivably constitute biased content. Moreover, developers of models that lack output explainability may find it difficult to establish how a model arrived at a specific output and to demonstrate that no ideological judgments were intentionally encoded.

Fortunately, the Bias Order requires OMB to issue guidance before the end of 2025 on the implementation of this policy, meaning developers should gain more clarity in the coming months on how this principle will be enforced. For now, both the Bias Order and the Action Plan identify several topics, namely DEI and climate change, that will clearly be in agency crosshairs when it comes to evaluating potential ideological bias in model outputs. As such, developers that wish to contract with federal agencies may consider taking preemptive steps, such as red teaming and adversarial testing, to explore when and how a model may be prompted to discuss potentially sensitive topics.

The Action Plan will also create opportunities for AI developers and deployers. For example, the Plan’s push to increase American AI exports should present opportunities for companies that can offer full-stack AI packages to expand their markets overseas. 

Likewise, the Action Plan’s efforts to streamline approval processes for infrastructure and reinforce the US power grid could open the door for more companies to construct or expand data centers. That said, while the Infrastructure Order emphasizes the need to grow and stabilize the power grid in order to support energy-intensive data center infrastructure — listing natural gas turbines, coal power equipment, nuclear power equipment, and geothermal power equipment as examples — neither the Plan nor the Infrastructure Order mentions any renewable energy sources like wind and solar power, which have been fundamental to many recent data center developments. The exclusion of renewable energy may pose challenges for tech companies’ efforts to meet their climate goals by expanding investments in utility-scale wind and solar projects.

The total impact of the AI Action Plan may not become fully apparent until federal agencies begin implementing its 90-plus Recommended Policy Actions over the coming months. But the Plan establishes a clear set of priorities within the Trump administration and may well serve as a blueprint for how the administration intends to approach AI over the next four years.

    This publication is produced by Latham & Watkins as a news reporting service to clients and other friends. The information contained in this publication should not be construed as legal advice. Should further analysis or explanation of the subject matter be required, please contact the lawyer with whom you normally consult. The invitation to contact is not a solicitation for legal work under the laws of any jurisdiction in which Latham lawyers are not authorized to practice. See our Attorney Advertising and Terms of Use.