
Trump Administration Takes Major Steps Toward Comprehensive Federal AI Regulation

March 26, 2026
The release of the administration’s AI policy framework and the draft TRUMP AMERICA AI Act mark a pivotal moment for US AI regulation, with potentially sweeping implications.

Key Points

  • The White House’s Framework calls for broad federal preemption of existing state AI laws while taking a “light-touch” regulatory approach by using existing agencies.
  • The Framework follows the release of a discussion draft of Senator Marsha Blackburn’s TRUMP AMERICA AI Act, which represents the most comprehensive piece of federal AI legislation proposed in the US to date.
  • If passed, Blackburn’s bill would have broad implications for US AI regulation, including preempting state law, establishing a new liability framework, and rewriting copyright liability.
  • While neither the Framework nor the draft bill is binding, they broadly signal how the Trump administration and key senators are looking to regulate AI technologies.

On March 20, 2026, the Trump administration issued a National Policy Framework for Artificial Intelligence (the Framework) outlining the White House’s non-binding “wish list” for federal AI regulation. The Framework came days after Senator Marsha Blackburn (R-Tenn.) released her TRUMP AMERICA AI Act, a 291-page discussion draft that represents the most comprehensive federal AI legislation proposed in the US to date. Together, these developments signal significant movement toward comprehensive federal AI regulation.

The Framework is organized around seven pillars focused on protecting vulnerable populations, respecting intellectual property rights, protecting free speech, encouraging AI development, and limiting state influence through federal preemption. Consistent with the administration’s “light-touch” philosophy, the Framework does not recommend creating new federal regulatory bodies, instead calling on Congress to leverage existing agencies and industry-led standards.

Senator Blackburn’s discussion draft bill takes a more prescriptive approach: The bill would create a federal liability framework, establish a duty of care for chatbot developers, declare that unauthorized use of copyrighted works for training does not constitute fair use, and require annual third-party audits of high-risk systems for political bias, among other things.

This article analyzes both the National Policy Framework and Senator Blackburn’s discussion draft; examines areas of alignment and divergence; and discusses practical implications for AI developers, deployers, and businesses navigating this regulatory landscape.

The National Policy Framework for Artificial Intelligence

The White House’s Framework outlines the administration’s legislative priorities across seven pillars, each offering recommendations to protect specific interests while broadly encouraging federal preemption of state AI laws. While the Framework does not bind Congress, it serves as a blueprint for federal legislation that is likely to receive presidential support. Each of the Framework’s seven pillars is discussed below.

I. Protecting Children and Empowering Parents

The Framework prioritizes child safety, urging Congress to build on recent administration actions to protect minors from AI-related harms. Key recommendations include:

  • Parental empowerment tools: AI platforms should provide parents with tools to manage children’s privacy settings, screen time, content exposure, and account controls.
  • Age assurance requirements: Platforms likely to be accessed by minors should implement “commercially reasonable, privacy protective” age-assurance measures, such as parental attestation.
  • Platform safety features: Platforms likely to be accessed by minors should implement features reducing risks of sexual exploitation and self-harm.
  • Child privacy protections: Existing child privacy protections should apply to AI systems, including limits on data collection for model training and targeted advertising.

Notably, the Framework urges Congress not to preempt states from applying general child protection laws to AI technologies.

II. Safeguarding and Strengthening American Communities

The Framework’s second pillar focuses on ensuring that AI benefits American communities while protecting vulnerable populations. Central to this is the administration’s “Ratepayer Protection Pledge,” which reflects a commitment from major AI companies that residential ratepayers will not bear increased electricity costs from AI data center construction. The Framework recommends codifying this pledge into law.

Additional recommendations include:

  • Streamlined permitting: Congress should streamline permitting for AI infrastructure to accelerate power-generation procurement and infrastructure buildout.
  • Protection from AI-enabled fraud: Congress should strengthen law enforcement efforts against AI impersonation scams and fraud targeting vulnerable populations, including seniors.
  • National security: National security agencies should possess the technical capacity to understand frontier AI capabilities and develop mitigation plans in consultation with frontier model developers.
  • Small business support: Congress should provide AI resources to small businesses through grants, tax incentives, and technical assistance programs.

III. Respecting Intellectual Property Rights and Supporting Creators

While the Framework asserts the Trump administration’s view that “training of AI models on copyrighted material does not violate copyright laws,” it “acknowledges arguments to the contrary” and encourages the courts to resolve the issue. Accordingly, the Framework recommends Congress refrain from taking action that would impact judicial resolution of the fair use question.

Other intellectual property recommendations include:

  • Collective licensing frameworks: Congress should consider enabling collective rights systems allowing rights holders to negotiate compensation from AI providers without antitrust liability.
  • Digital replica protections: Congress should consider protecting individuals from unauthorized distribution or commercial use of digital replicas of their voice or likeness, with exceptions for parody, satire, and news reporting.
  • Continued monitoring: Congress should monitor judicial progress and evaluate whether legislation is needed to “fill potential gaps” or provide additional protections for content creators.

IV. Preventing Censorship and Protecting Free Speech

Reflecting the administration’s longstanding concerns about political bias in AI systems, the Framework recommends:

  • Preventing government coercion: Congress should prohibit the federal government from coercing technology providers to ban, compel, or alter content based on partisan or ideological agendas.
  • Providing redress mechanisms: Americans should have a means to seek redress for federal agency efforts to censor expression or control information on AI platforms.

V. Enabling Innovation and Ensuring American AI Dominance

The innovation pillar reflects the administration’s belief that regulatory restraint is essential to American AI leadership. Key recommendations include:

  • Regulatory sandboxes: Congress should establish regulatory sandboxes to encourage AI development and deployment leadership.
  • Access to federal data: Congress should make federal datasets accessible to industry and academia for AI training.
  • No new regulatory bodies: Congress should not create federal rulemaking bodies to regulate AI. Existing agencies and industry-led standards should govern sector-specific AI applications.

VI. Educating Americans and Developing an AI-Ready Workforce

The Framework calls for Congress to ensure American workers benefit from AI-driven growth through education and workforce development. Recommendations include:

  • AI training integration: Existing education and workforce training programs should incorporate AI training through non-regulatory methods.
  • Workforce impact studies: Congress should expand federal study of AI-driven workforce realignment to inform supportive policies.
  • Land-grant institutions: Congress should bolster land-grant institutions to provide technical assistance, demonstration projects, and AI youth development programs.

VII. Establishing a Federal Policy Framework and Preempting State AI Laws

Perhaps the most consequential aspect of the Framework is its call for federal preemption of state AI laws. This initiative is not new — the Trump administration made similar recommendations in its July 2025 AI Action Plan and again in a sweeping December 2025 Executive Order targeting state AI law.

The Framework states that the federal government “must establish a federal AI policy framework to protect American rights, support innovation, and prevent a fragmented patchwork of state regulations that would hinder our national competitiveness, while respecting federalism and State rights.” The preemption recommendations are specific and far-reaching:

  • Preemption of burdensome state laws: Congress should preempt state AI laws that impose “undue burdens,” ensuring a minimally burdensome national standard consistent with these recommendations.
  • Preserved state authority: This national standard should respect federalism by not preempting (1) state police powers to enforce generally applicable laws, including child protection, fraud, and consumer protection; (2) state zoning authority over AI infrastructure placement; and (3) requirements governing states’ own AI use.
  • Developer-focused legislation: States should not regulate AI development “because it is an inherently interstate phenomenon with key foreign policy and national security implications.” Nor should states penalize developers for third-party unlawful conduct involving their models.

The Blackburn Discussion Draft: The TRUMP AMERICA AI Act

Just days before the White House issued its Framework, Senator Blackburn released a discussion draft of the TRUMP AMERICA AI Act (the Blackburn Bill or Bill), a sweeping 291-page bill organized around her four “Cs” (children, creators, conservatives, and communities) that incorporates several pieces of bipartisan legislation, including the Kids Online Safety Act and the NO FAKES Act. While the Blackburn Bill is still in the nascent stages of the legislative process, it represents the most comprehensive federal AI legislation proposed in the US and would accomplish much of the Trump administration’s legislative agenda with respect to AI.

While the draft bill covers substantial ground across 17 separate titles, a summary of the Blackburn Bill’s most significant provisions follows:

Safety Standards, Repeal of Section 230, and a New AI Liability Framework

The Blackburn Bill would establish a general duty of care for AI chatbot developers, which broadly would mean exercising “reasonable care in the design, development, and operation” of the chatbot to prevent foreseeable harms to users. The Federal Trade Commission is directed to promulgate minimum safeguards regarding compliance with this reasonable care requirement.

The Blackburn Bill would also enact the Kids Online Safety Act, which would require online platforms to implement safeguards protecting minors, including parental-access tools and harm-reporting mechanisms. Similarly, the Bill’s GUARD Act would prohibit designing or distributing chatbots that solicit minors to engage in sexually explicit conduct or otherwise encourage suicide, self-injury, or physical violence, with fines up to $100,000 per offense.

Significantly, the Blackburn Bill would repeal Section 230 of the Communications Decency Act in its entirety, eliminating the longstanding immunity that shields platforms from civil liability for third-party content. Such a repeal would have sweeping ramifications for all internet platforms that host user content, not just AI platforms.

Via the AI LEAD Act, the Bill would establish a federal liability framework, enabling the US Attorney General, state attorneys general, and private litigants to sue AI developers for harms such as property damage, physical injury, death, financial or reputational injury, and psychological anguish. Developers would be liable upon proof of a failure to exercise reasonable care, a breach of express warranty, or a defective condition that proximately caused harm. Deployers would be deemed liable as developers if they substantially modify the AI system or intentionally misuse it in a manner that proximately causes harm.

Copyright and Digital Replicas

The Blackburn Bill’s copyright provisions represent a significant departure from both existing law and the Framework’s narrower approach. Title XV would amend the Copyright Act to declare that unauthorized reproduction of copyrighted works for AI training does not constitute fair use, effectively overriding pending judicial determinations and clashing with the Framework’s recommendation to let courts decide the issue.

The Bill would further establish that any AI created through inference or distillation would be deemed to incorporate copyrighted training materials unless the developer proves by clear and convincing evidence that it used only authorized materials or that no copyrighted expression from the source data is embedded in or reproducible by the AI. Title XIII (the TRAIN Act) would enable copyright holders to obtain subpoenas requiring disclosure of copyrighted works used in training, while Title XII (the NO FAKES Act) would protect individuals’ voice and visual likenesses from unauthorized digital replicas.

Bias Protections

Reflecting the administration’s concerns about political bias, Title VIII would require annual third-party audits of high-risk AI systems to detect viewpoint or political affiliation discrimination. Title XVI would codify President Trump’s executive order on preventing “woke AI” in the federal government by limiting agency procurement of LLMs to those that comply with “unbiased AI principles,” including truthfulness, historical accuracy, scientific objectivity, and ideological neutrality. Federal AI contracts would be required to include compliance terms, with decommissioning costs charged to the vendor for noncompliance.

Other Notable Provisions

Other notable provisions in the draft Bill include: requiring certain public and private companies to submit quarterly reports to the Department of Labor on AI-related job loss and other effects; codifying the Ratepayer Protection Pledge by directing the Secretary of Energy to enter agreements with data center operators to protect consumers from rate increases; and authorizing a Center for AI Standards and Innovation at the National Institute of Standards and Technology to develop AI assessment guidelines and synthetic content detection tools.

Preemption

The Blackburn Bill expressly states that it does not preempt any generally applicable law, representing a more limited approach than the White House Framework contemplates. However, some individual titles contain their own preemption provisions. For example, the Kids Online Safety Act preempts conflicting state laws while permitting states to enact greater protections for minors.

Implications for Developers, Deployers, and Other Businesses

Neither the Framework nor the draft Blackburn Bill is binding, and thus they do not alter the current US regulatory landscape. The Framework is merely the White House’s wish list with no binding effect on Congress, and the Blackburn Bill has yet to be formally introduced, much less enacted. However, these developments reflect the latest swell in the administration’s sustained push toward a federal AI regime that would broadly preempt state law and reduce compliance burdens for AI companies.

The Bill will likely undergo (potentially significant) revisions throughout the legislative process, and passage is certainly not guaranteed. Nonetheless, companies developing or deploying AI should monitor its progress and consider preparing for potential new compliance obligations if the Bill, or one similar to it, becomes law. In particular, the Bill’s liability framework would represent a fundamental shift, including strict liability for defective AI products, expanded deployer liability, and restrictions on contractual liability limitations. Deployers who modify or misuse AI systems would face direct exposure. Developers and deployers should review testing protocols, documentation practices, insurance coverage, and vendor indemnification provisions.

Likewise, if the Bill’s fair use exclusion is enacted, training procedures would need to be closely scrutinized, with developers needing to demonstrate authorization for all training materials, audit training data provenance, develop licensing strategies, and maintain robust documentation. However, this provision may face pushback from the White House given its preference to let courts decide the issue.

Platforms accessible to minors should consider how they would implement various child-protection requirements, including protective safeguards, parental access tools, and harm-reporting mechanisms. The repeal of Section 230 would also fundamentally reshape platform liability, requiring companies to revisit content moderation strategies and litigation reserves.

Finally, despite the federal preemption contemplated by both the Framework and the Blackburn Bill, companies should continue to track state-level developments and assume that state AI laws remain enforceable unless expressly preempted by federal law or court ruling.

While the path forward for federal AI legislation remains uncertain, the coordinated release of these documents signals the most serious push toward comprehensive federal AI legislation to date and indicates that the administration will continue to drive toward a unified federal regime. Companies operating in the AI space should use this period to assess their exposure and develop compliance strategies.

Endnotes

    This publication is produced by Latham & Watkins as a news reporting service to clients and other friends. The information contained in this publication should not be construed as legal advice. Should further analysis or explanation of the subject matter be required, please contact the lawyer with whom you normally consult. The invitation to contact is not a solicitation for legal work under the laws of any jurisdiction in which Latham lawyers are not authorized to practice.