Client Alert

President Trump Signs “Take It Down Act” Into Law

May 21, 2025
New federal legislation aims to combat the publication of non-consensual deepfakes and mandates notice-and-removal procedures for covered platforms.

Key Points

  • The legislation requires online services that primarily feature user-generated and curated content (such as social media platforms) to develop and implement notice-and-removal procedures within one year.
  • Criminal liability appears to be aimed at end users that post intimate visual depictions or digital forgeries to internet platforms, rather than the platforms themselves.

On May 19, 2025, President Trump signed into law the “Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks Act” (the Take It Down Act or the Act).1 The Act, which received widespread bipartisan support, prohibits any person from knowingly publishing “intimate visual depictions” of minors and non-consenting adults. This includes deepfakes, which are images or videos that have been edited or generated by artificial intelligence (AI).

The Act gives certain websites and other online services one year to implement notice-and-removal procedures to enable victims to remove unlawful images. Under the Act, a covered platform that is subject to the Act’s notice-and-removal requirements must remove non-consensual intimate images within 48 hours of receiving a valid removal notice. Failure to comply will be treated as a violation of the Federal Trade Commission Act (FTCA) and subject to Federal Trade Commission (FTC) enforcement.

Notably, the Act disclaims liability for any covered platform that, in good faith, disables or removes content that is reported to the platform as a non-consensual deepfake, even if it is later determined that the content did not violate the Act. In other words, a covered platform that mistakenly removes lawful content in response to a takedown request is protected from legal action by the content’s creator, provided that the platform acted in good faith.

This Client Alert focuses on the obligations of and implications for covered platforms and briefly addresses the Act’s imposition of criminal liability for the publication of non-consensual intimate visual depictions.

Notice-and-Removal Requirements for Covered Platforms

True to its name, Section 3 of the Take It Down Act requires certain online services to take down potentially unlawful content from their platforms. Specifically, the Act requires that, within one year of the Act’s effective date, all covered platforms must establish procedures through which an individual (or the individual’s representative) may request that the platform remove an intimate visual depiction of that individual published on the covered platform without the individual’s consent. Because the Act was signed into law on May 19, 2025, these notice-and-removal requirements will go into effect on May 19, 2026. 

Definition of “Covered Platform”

The Act defines “covered platform” as any “website, online service, online application, or mobile application” that (i) “serves the public” and (ii) “primarily provides a forum for user-generated content, including messages, videos, images, games, and audio files; or for which it is in the regular course of trade or business of the website, online service, online application, or mobile application to publish, curate, host, or make available content of nonconsensual intimate visual depictions.”

The Act expressly excludes from the definition of “covered platform” email and internet service providers, as well as online services that “consist[] primarily of content that is not user generated but is preselected by the provider of such online service, application, or website; and for which any chat, comment, or interactive functionality is incidental to, directly related to, or dependent on the provision of the content[.]” As such, the Act’s notice-and-removal requirements will likely implicate online services that primarily feature user-generated and curated content (such as social media platforms) as opposed to, for example, a website that curates its own original content and merely permits users to engage with the content via a comment section.

Takedown Provisions

Under the Act, covered platforms are required to provide “clear and conspicuous notice” (such as a link to a separate web page or disclosure) that informs users of the notice-and-removal process. This notice must be in plain, easy-to-read language and should provide users with information about the covered platform’s responsibilities under the Act, including how an individual can submit a request for removal. 

The Act only requires that covered platforms respond to “valid” removal requests. In order for a removal request to be considered “valid,” it must be in writing and include (i) a physical or electronic signature of the individual making the request (or their representative); (ii) an identification of, and information reasonably sufficient for the covered platform to locate, the intimate visual depiction in question; (iii) a brief statement of the individual’s good-faith belief that the depiction was not consensual; and (iv) the individual’s contact information. Following the receipt of a valid removal request, covered platforms have 48 hours to remove the content, as well as to make “reasonable efforts to identify and remove any known identical copies.” 
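
For platforms translating these statutory elements into an intake workflow, the following sketch illustrates one way to record a removal request and track the 48-hour window. It is a minimal, hypothetical example: the class, field names, and sample values are illustrative only, and the Act does not prescribe any particular data format, tooling, or programming approach.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from typing import Optional

REMOVAL_WINDOW = timedelta(hours=48)  # the Act's removal deadline after receipt of a valid request

@dataclass
class RemovalRequest:
    """Hypothetical record of a takedown request; the Act requires elements (i)-(iv) but no particular format."""
    signature: Optional[str]             # (i) physical or electronic signature of the individual or representative
    content_locator: Optional[str]       # (ii) information reasonably sufficient to locate the depiction (e.g., a URL)
    good_faith_statement: Optional[str]  # (iii) brief statement of good-faith belief that the depiction is non-consensual
    contact_info: Optional[str]          # (iv) contact information for the individual
    received_at: datetime                # timestamp used to start the 48-hour clock

def is_valid(request: RemovalRequest) -> bool:
    """Treat a request as 'valid' only if all four required elements are present."""
    return all([
        request.signature,
        request.content_locator,
        request.good_faith_statement,
        request.contact_info,
    ])

def removal_deadline(request: RemovalRequest) -> datetime:
    """Content must be removed within 48 hours of receipt of a valid request."""
    return request.received_at + REMOVAL_WINDOW

# Example: a complete request received now must be actioned within 48 hours.
request = RemovalRequest(
    signature="/s/ Jane Doe",
    content_locator="https://example.com/post/123",
    good_faith_statement="I did not consent to the publication of this image.",
    contact_info="jane.doe@example.com",
    received_at=datetime.now(timezone.utc),
)
print(is_valid(request), removal_deadline(request))
```

However a platform chooses to implement intake, the practical point is the same: each of elements (i) through (iv) should be captured alongside a receipt timestamp so that the 48-hour deadline can be calculated and monitored.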

Failure to comply with the Act’s takedown provisions constitutes an unfair or deceptive act under Section 18(a)(1)(B) of the FTCA. The Act vests enforcement authority in the FTC, and violations are subject to the penalties available under the FTCA, which could include civil fines, injunctive relief, and consumer redress.

Notably, the Act only creates liability for covered platforms that fail to remove unlawful deepfakes; it does not consider the removal of lawful content an FTCA violation or otherwise create a cause of action for persons who claim that their content was wrongfully removed. In fact, Section 3(a)(4) of the Act protects a covered platform from liability for “any claim” it may face in the event that the platform makes a good-faith decision to disable or remove content in response to a valid removal request, even if it is later determined that the removed content did not in fact violate the Act. In other words, a covered platform that removes content that it believes in good faith to be unlawful cannot be held liable for such removal, regardless of whether the content was actually unlawful.

Implications for Covered Platforms

By requiring covered platforms to facilitate takedown requests from the public and act on such requests in just 48 hours, the Take It Down Act imposes a significant obligation on covered platforms and incentivizes a somewhat aggressive approach to content moderation. While covered platforms should review removal requests in good faith, they should err on the side of removing potentially unlawful deepfakes in response to a valid removal notice unless it is clear that the content at issue is lawful. If a covered platform removes content that is later determined to be lawful, the Act’s safe harbor will protect the platform from any potential claim by the content creator, provided that the platform acted in good faith.

Conversely, a platform that chooses not to remove content that is later deemed to be unlawful could put itself squarely in the FTC’s enforcement crosshairs. It is possible that these provisions may lead to challenges on First Amendment grounds, since the effect will be to incentivize platforms to remove or suppress content that may not in fact violate the Act. 

Criminal Liability 

Separate from the notice-and-removal requirements discussed above, Section 2 of the Take It Down Act creates criminal liability for “using an interactive computer service to knowingly publish” an “intimate visual depiction” or a “digital forgery” of an identifiable minor or non-consenting adult, except in certain limited circumstances such as complying with a law enforcement investigation. The Act defines “digital forgery” as “any intimate visual depiction of an identifiable individual created through the use of software, machine learning, artificial intelligence, or any other computer-generated or technological means … that, when viewed as a whole by a reasonable person, is indistinguishable from an authentic visual depiction of the individual.” The language of Section 2, while not crystal clear, appears to be aimed at end users that post intimate visual depictions or digital forgeries to internet platforms, rather than the platforms themselves. 

Any person that knowingly publishes non-consensual intimate visual depictions may face fines and/or up to three years in prison under Section 2.

Next Steps

While the Take It Down Act’s notice-and-removal requirements do not take effect until May 19, 2026, companies should consider formulating compliance plans now, including by:

  • Consulting with counsel and compliance experts. Companies that operate online should consult with legal counsel and compliance experts to determine whether they fall under the Act’s definition of a covered platform and, if so, to understand all of the Act’s requirements with respect to notice-and-removal procedures.
  • Designing clear reporting mechanisms. If a platform is subject to the Act’s notice-and-removal provisions, the platform must create and prominently display an accessible and easy-to-use process through which individuals can submit content removal requests. The mechanism should capture the elements the Act requires for a valid request, including information sufficient to identify and locate the reported content and a statement of the individual’s good-faith belief that the depiction is non-consensual.
  • Developing and testing a review and removal protocol. Once a reporting process is in place, covered platforms must develop a protocol for reviewing and responding to valid removal requests within 48 hours (a simplified sketch of such a workflow appears after this list). This protocol will likely require a bespoke approach for each covered platform, taking into account factors such as the size of the platform’s user base, the types of content typically published on the platform, and the number of removal requests the platform anticipates receiving. While the Act does not explicitly require human review or otherwise prohibit the use of automated processes as part of removal procedures, covered platforms should remember that the Act’s safe harbor from liability for removing content that is later determined to be lawful only applies insofar as the covered platform acted in good faith. As such, covered platforms should work with legal counsel to ensure that their removal procedures would comply with that requirement.
  • Identifying and training compliance staff. Covered platforms should identify appropriate staff to oversee the reporting-and-removal process and provide staff with appropriate training to ensure they understand both the Act’s requirements and the platform’s procedures.
  • Updating terms of service and community guidelines. Covered platforms should consider revising existing user policies to reflect the Act’s prohibitions on the distribution of intimate visual depictions without consent and inform users that such content is subject to removal.
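
As a companion to the bullet above on developing and testing a review and removal protocol, the sketch below illustrates, under stated assumptions, how an automated removal step might handle both the reported item and “known identical copies.” The in-memory content store, example.com URLs, and exact SHA-256 hash matching are all hypothetical simplifications; a real platform would rely on its own storage, moderation tooling, and matching techniques, and the Act does not mandate any particular implementation or degree of automation.

```python
import hashlib
from datetime import datetime, timedelta, timezone

REMOVAL_WINDOW = timedelta(hours=48)

# Hypothetical in-memory content store keyed by URL; a real platform would use
# its own storage and moderation systems.
CONTENT_STORE = {
    "https://example.com/post/123": b"<reported image bytes>",
    "https://example.com/post/456": b"<reported image bytes>",   # identical copy
    "https://example.com/post/789": b"<unrelated image bytes>",
}

def remove_reported_content(locator: str, received_at: datetime, audit_log: list) -> datetime:
    """Remove the reported item and any known identical copies; return the 48-hour deadline.

    The audit log documents what was reviewed and removed, which supports a showing
    that the platform acted in good faith.
    """
    deadline = received_at + REMOVAL_WINDOW

    reported = CONTENT_STORE.pop(locator, None)
    if reported is None:
        audit_log.append((locator, "content not found", datetime.now(timezone.utc)))
        return deadline

    audit_log.append((locator, "removed", datetime.now(timezone.utc)))

    # "Reasonable efforts to identify and remove any known identical copies,"
    # sketched here as an exact-duplicate hash comparison.
    fingerprint = hashlib.sha256(reported).hexdigest()
    for url, data in list(CONTENT_STORE.items()):
        if hashlib.sha256(data).hexdigest() == fingerprint:
            del CONTENT_STORE[url]
            audit_log.append((url, "removed identical copy", datetime.now(timezone.utc)))

    return deadline

# Example: acting on a valid request received now.
log: list = []
deadline = remove_reported_content("https://example.com/post/123", datetime.now(timezone.utc), log)
print("Removal deadline:", deadline)
print("Remaining content:", list(CONTENT_STORE))
print("Audit log entries:", len(log))
```

In practice, platforms will need to layer human review, escalation paths, and more robust duplicate detection on top of any such automation, and should document each decision so that the good-faith conduct underlying the Act’s safe harbor can be demonstrated.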

Endnotes

1. S. 146, 119th Cong. (2025), https://www.congress.gov/bill/119th-congress/senate-bill/146/text.

    This publication is produced by Latham & Watkins as a news reporting service to clients and other friends. The information contained in this publication should not be construed as legal advice. Should further analysis or explanation of the subject matter be required, please contact the lawyer with whom you normally consult. The invitation to contact is not a solicitation for legal work under the laws of any jurisdiction in which Latham lawyers are not authorized to practice. See our Attorney Advertising and Terms of Use.