EU AI Act: GPAI Model Obligations in Force and Final GPAI Code of Practice in Place
On 10 July 2025, the European AI Office (AI Office) published the final version of the Code of Practice (Code of Practice or CoP) for providers of General Purpose AI (GPAI) models under the EU Artificial Intelligence Act (EU AI Act). The CoP is a voluntary tool, prepared by independent experts and intended to help providers of GPAI models demonstrate compliance with their obligations under the EU AI Act. The CoP includes three chapters: (1) Transparency, (2) Copyright, and (3) Safety and Security.
The CoP imposes extensive obligations on GPAI providers, including specific guidelines for designing compliance and audit structures that may conflict with existing processes and responsibility concepts. However, the CoP also leaves several questions unanswered (such as the criteria for reporting serious incidents), allowing providers of GPAI models some leeway in how to implement the measures in practice.
This article provides an overview and analysis of the structure and key requirements of the CoP, including how they will impact GPAI models in practice.
Background
- Role of the AI Office: The AI Office is a centre of AI expertise established within the European Commission that oversees and enforces the EU AI Act’s rules for GPAI models. Under Art. 56 (1) EU AI Act, the AI Office is responsible for drawing up codes of practice to help providers of GPAI models comply with their obligations under Art. 53 et seqq. EU AI Act. The AI Office also has the power to request information (Art. 91 EU AI Act) and to evaluate GPAI models (Art. 92 EU AI Act).
- Timeline: After publication on 10 July 2025, the European Commission and the AI Board (an advisory body composed of representatives of the EU Member States) endorsed the CoP on 1 August 2025 via so-called adequacy decisions. The EU AI Act’s rules for GPAI models then became applicable on 2 August 2025 for new models placed on the market on or after that date; providers of GPAI models already on the market before 2 August 2025 have until 2 August 2027 to bring their models and documentation into compliance.
- Effects: The CoP is not legally binding, and, while providers of GPAI models are invited to sign the CoP, doing so is not mandatory. Through its endorsement, the CoP has acquired general validity and presents one option for providers of GPAI models to demonstrate compliance with the EU AI Act. Providers that do not adhere to the CoP must demonstrate compliance with the EU AI Act by other means and are required to explain to the AI Office how the measures they implement ensure such compliance.
Scope and Definitions
- GPAI models: The CoP applies only to GPAI models, not to mere AI systems. GPAI models are defined in the EU AI Act as AI models that (i) display significant generality, and (ii) are capable of competently performing a wide range of distinct tasks, regardless of the way the model is placed on the market, and that can be integrated into a variety of downstream systems or applications (Art. 3 (63) EU AI Act). In July 2025, the AI Office also published guidelines on the scope of the obligations for providers of GPAI models (Guidelines on obligations for GPAI providers), under which a model is presumed to be a GPAI model if its training compute exceeds 10^23 FLOP and it can generate language, text-to-image, or text-to-video (a back-of-the-envelope illustration of this threshold follows this list).
- GPAI models with systemic risk: The Safety and Security chapter only applies to GPAI models with systemic risk, and includes additional, detailed specifications for this category. A systemic risk is defined as a risk that is specific to the high-impact capabilities of GPAI models, having a significant impact on the EU market due to their reach, or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or society as a whole (Art. 3 (65) EU AI Act). Art. 51 EU AI Act sets out specific criteria to determine whether a GPAI model involves systemic risks, including a presumption of high-impact capabilities where the cumulative training compute exceeds 10^25 FLOP. So far, only a few, very powerful GPAI models meet these criteria.
- Provider of GPAI models: The CoP applies to providers that place GPAI models on the EU market. This means a natural or legal person or other entity that develops a GPAI model or has such a model developed and places it on the market (Art. 3 (3) EU AI Act). Downstream providers that fine-tune or modify an existing GPAI model (e.g., a company integrating a GPAI model into its products) may also become a provider under the CoP and the EU AI Act under certain circumstances, but only in respect of their modifications. In this case, special provisions under the CoP apply (e.g., with respect to transparency).
- Open-source GPAI: Open-source GPAI models are generally exempt from the CoP, apart from a brief mention in the Copyright chapter. However, this exemption does not apply to GPAI models with systemic risks.
- Small- and medium-sized enterprises (SMEs): The CoP rules are intended to consider the situation and requirements of SMEs, including startups.
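To put the 10^23 FLOP presumption into perspective, the following sketch estimates training compute with the widely used approximation of 6 FLOP per model parameter per training token. The approximation and the example figures are illustrative assumptions, not a methodology mandated by the Guidelines.

```python
# Back-of-the-envelope training compute estimate using the common
# 6 * N * D approximation (FLOP ~ 6 x parameters x training tokens).
# Illustrative only; not prescribed by the AI Office Guidelines.

GPAI_PRESUMPTION_FLOP = 1e23  # indicative threshold in the Guidelines

def estimated_training_flop(parameters: float, tokens: float) -> float:
    """Approximate total training compute in FLOP."""
    return 6.0 * parameters * tokens

# Hypothetical example: a 70B-parameter model trained on 2T tokens.
flop = estimated_training_flop(70e9, 2e12)
print(f"{flop:.2e} FLOP; exceeds presumption: {flop > GPAI_PRESUMPTION_FLOP}")
# -> 8.40e+23 FLOP; exceeds presumption: True
```

On these assumed figures, the model would comfortably exceed the presumption threshold; the Guidelines also allow compute to be estimated from the hardware actually used for training.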
Structure and Key Requirements
This section summarises the key requirements under each of the CoP’s three chapters.
Transparency
- Model documentation form: Providers of GPAI models must complete a detailed model documentation form to comply with their transparency obligations towards the AI Office, national supervisory authorities, and downstream providers under Art. 53(1)(a) and (b) EU AI Act. The form covers, inter alia, information on the technical properties, the training, the energy consumption, and the intended use of a GPAI model. It differentiates between information intended for downstream providers (who integrate a GPAI model), the AI Office, and competent national supervisory authorities; an illustrative sketch of such an audience-differentiated record follows this list. The documentation must be drawn up for each model version and retained for 10 years. The AI Office has also published a mandatory template that all providers of GPAI models must complete in order to comply with their obligation to provide a public summary of the model’s training data under Art. 53(1)(d) EU AI Act. For CoP signatories, this training data template complements the model documentation form in the CoP to address the full range of transparency obligations under Art. 53.
- Publication obligations: Under the CoP, providers of GPAI models are required to publicly disclose (e.g., on their website) contact information through which the AI Office and downstream providers can request access to the relevant information. GPAI providers can be required to provide additional information to downstream providers within 14 days of a request if the request is “relevant for its integration” and enables “those downstream providers to comply with their obligations pursuant to the AI Act”. Providers of GPAI models appear, however, to be permitted to protect their IP, trade secrets, and confidential information when responding to such requests. Providers of GPAI models are also encouraged to consider whether information documented in the model documentation form could, at least in part, be disclosed to the public to promote transparency.
- Quality control: Providers of GPAI models must ensure that the documented information is controlled for quality and integrity, retained as evidence of compliance with obligations in the EU AI Act, and protected from unintended alterations.
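Purely as an illustration of how a provider might organise this documentation internally, the sketch below tags each documented item with its intended audience and keeps one record per model version. The field names and audience labels are assumptions made for this illustration only; the model documentation form itself defines the authoritative structure.

```python
from dataclasses import dataclass, field
from enum import Enum

class Audience(Enum):
    DOWNSTREAM_PROVIDER = "downstream provider"
    AI_OFFICE = "AI Office"
    NATIONAL_AUTHORITY = "national supervisory authority"

@dataclass
class DocumentationItem:
    """One entry of the internal documentation record (illustrative)."""
    topic: str      # e.g. technical properties, training, energy use
    content: str
    audiences: set[Audience]

@dataclass
class ModelDocumentation:
    """Per-version record, to be retained for 10 years (illustrative)."""
    model_name: str
    model_version: str
    items: list[DocumentationItem] = field(default_factory=list)

    def visible_to(self, audience: Audience) -> list[DocumentationItem]:
        """Filter the record for a given requester."""
        return [i for i in self.items if audience in i.audiences]

doc = ModelDocumentation("example-model", "1.0")
doc.items.append(DocumentationItem(
    topic="energy consumption",
    content="training energy use: <figures>",
    audiences={Audience.AI_OFFICE, Audience.NATIONAL_AUTHORITY},
))
print([i.topic for i in doc.visible_to(Audience.AI_OFFICE)])
```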
Copyright
The Copyright chapter emphasises the importance of AI models respecting intellectual property rights. It serves as a guiding document for demonstrating compliance with the obligations provided for in Art. 53(1)(c) EU AI Act, pursuant to which GPAI providers must implement a policy to comply with EU legislation on copyright and related rights (the Copyright Policy). In particular, they must identify and comply with reservations of rights expressed by rightsholders pursuant to Art. 4(3) of Directive (EU) 2019/790 (the opt-outs).
- Copyright policy: Providers of GPAI models are required to develop, maintain, and implement a copyright policy. This policy must be overseen by designated individuals in the respective company. Providers are encouraged to publish and maintain a summary of the policy.
- Requirement for the use of web crawlers: When training GPAI models, providers of GPAI models that are using web crawlers commit to abide by certain limitations and measures.
- Reproduction and extraction of only lawfully accessible content: Providers of GPAI models must respect “technological measures” (as defined in Art. 6 of Directive 2001/29/EC, the InfoSoc Directive) and must commit to exclude “pirate” websites, a list of which will be made publicly available on an EU website. The CoP’s wording suggests that subscription models and paywalls should be treated as “technological measures”, which could signal an increased risk of claims under the national implementations of Art. 6 of the InfoSoc Directive.
- Identification of, and compliance with, rights reservations: Providers of GPAI models commit to only employ web crawlers (or have such web crawlers employed on their behalf) that follow the Robots Exclusion Protocol (robots.txt), as specified by the Internet Engineering Task Force in RFC 9309, and to identify and comply with other appropriate machine-readable protocols to express opt-outs (a minimal sketch of a robots.txt compliance check follows at the end of this chapter). Improving the participation and information of rightsholders is one of the objectives of the EU AI Act, which is reflected in the CoP. In this respect, providers of GPAI models are encouraged to engage on a voluntary basis in discussions with rightsholders as part of the “inclusive process”, with the aim of developing appropriate machine-readable standards and protocols to express opt-outs. They also commit to take appropriate measures to enable rightsholders to obtain information about the web crawlers employed, their robots.txt features, and the other measures that the GPAI model provider adopts to identify and comply with opt-outs.
- Providers of online search engines: When providers of GPAI models also provide (or control) an online search engine, they are encouraged to take appropriate measures to ensure that honouring an opt-out in the context of text and data mining and the training of GPAI models does not directly lead to adverse effects on the indexing in their search engine of the content, domains, or URLs for which the rights reservation has been expressed.
- Implementation of safeguards: When using or licensing a model for integration into an AI system, providers of GPAI models are required to implement appropriate and proportionate technical safeguards to prevent models from generating outputs infringing copyright. Further, they are required to prohibit copyright-infringing uses of a model in acceptable use policies, terms and conditions, or, for open-source models, in the documentation accompanying the model.
- Communication with rightsholders: Providers must also designate a point of contact for electronic communication with rightsholders and provide easily accessible information about that point of contact. They are also required to provide a mechanism allowing rightsholders and their representatives to submit complaints, and to make easily accessible information about that mechanism available. Regarding complaints, providers of GPAI models are obliged to act in a diligent, non-arbitrary manner and within a reasonable time. Exceptions apply only if a complaint is manifestly unfounded or if the provider has already responded to an identical complaint by the same rightsholder.
- Extraterritoriality and compliance with EU legislation: Recital 106 of the EU AI Act states that providers placing GPAI models on the EU market “should put in place a policy to comply with Union law on copyright …” regardless of the jurisdiction in which the copyright-relevant acts underpinning the training of those GPAI models take place. While both the first and second drafts of the CoP referenced Recital 106 and the potential imposition of extraterritorial obligations, the final version of the Copyright chapter omits any mention of it. This arguably suggests a deliberate decision by the drafters to distance themselves from a non-binding recital of the EU AI Act that continues to be the subject of enormous controversy; significant ambiguity remains as to the enforceability of Recital 106 and the extent to which it may influence legal interpretation or policy implementation. The requirement in the CoP to adopt a policy ensuring compliance with Union law on copyright reflects Art. 53 EU AI Act; however, it does not offer further insight into how this requirement must be complied with in practice. There is no single, self-contained body of Union copyright law that individuals or companies can directly comply with: the content of the applicable Union law depends on how each of the 27 Member States has transposed the copyright directives into national law.
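As a concrete illustration of the robots.txt commitment referenced above, the following minimal sketch uses Python’s standard-library urllib.robotparser, an implementation of the Robots Exclusion Protocol. The robots.txt content and the “ExampleAIBot” user-agent are hypothetical; in practice, a training crawler would check each URL against the target site’s live robots.txt before fetching.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt; "ExampleAIBot" is an illustrative
# user-agent name, not a real crawler.
ROBOTS_TXT = """\
User-agent: ExampleAIBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# A compliant training crawler checks every URL before fetching it.
url = "https://example.com/articles/story.html"
if parser.can_fetch("ExampleAIBot", url):
    print("fetch allowed")
else:
    print("opt-out expressed; skip this URL")  # printed for this robots.txt
```

Here the site-wide Disallow for the AI crawler functions as a machine-readable rights reservation, while ordinary user agents remain unaffected.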
Safety and Security
The Safety and Security chapter provides an extensive set of rules and requirements for providers of GPAI models with systemic risk. This includes processes to identify, analyse, assess, and mitigate systemic risks. The chapter also defines requirements for the design of compliance systems and outlines how to allocate responsibilities and resources. Further, the chapter specifies requirements for reporting serious incidents under Art. 55 EU AI Act.
Compliance with the standards set out in the Safety and Security chapter can reduce legal risks for providers of GPAI models. However, implementing the respective requirements will also entail significant administrative effort.
- Safety and Security Framework: To outline their systemic risk management process, providers of GPAI models are required to create and maintain a state-of-the-art Safety and Security Framework (Security Framework). The Security Framework must be set up (i) no later than four weeks after the provider has notified the European Commission under Art. 52 EU AI Act that its model qualifies as a GPAI model with systemic risk, and (ii) no later than two weeks before the GPAI model is placed on the market. Providers of GPAI models also need to notify the AI Office of their Security Framework.
- Risk identification, analysis, and acceptance: In addition, providers of GPAI models need to implement a more specific process to (i) identify systemic risks, (ii) conduct a systemic risk analysis, and (iii) determine whether systemic risks stemming from their GPAI model are acceptable.
- Risk identification: Potential risks inherent to GPAI models in general as well as specific risks stemming from the analysed GPAI model need to be considered.
- Systemic risk analysis: This analysis is intended to facilitate the risk acceptance determination and includes five elements for each identified systemic risk: (i) compiling model-independent information, (ii) conducting GPAI model evaluations, (iii) modelling the systemic risk, (iv) estimating the systemic risk, and (v) conducting post-market monitoring (e.g., through end-user feedback and adequate reporting channels). To facilitate post-market monitoring, providers of GPAI models are also required to provide external evaluators with access to their model.
- Acceptability of systemic risks: To determine whether a systemic risk is acceptable, providers of GPAI models are required to document and justify which systemic risks are acceptable based on pre-defined “appropriate systemic risk tiers” (see the illustrative sketch after this list).
- Mitigating measures: To ensure that systemic risks are acceptable, providers of GPAI models must also implement appropriate mitigation measures along the entire model life cycle. These can include filtering and cleaning training data, monitoring the input and/or output of GPAI models, offering risk mitigation tools to other actors, and techniques that provide safety guarantees concerning the behaviour of a GPAI model. In addition, providers of GPAI models need to implement an adequate level of cybersecurity for their models and physical infrastructure.
- Safety and Security Model Reports: Before placing a GPAI model on the market, providers are required to share information about their model and the systemic risk analysis with the AI Office. To do so, providers of GPAI models must prepare a Safety and Security Model Report (Model Report) that includes detailed information on the design of the GPAI model, the systemic risk assessments conducted, and available reports from external evaluators or security auditors. The Model Report must be updated throughout the life cycle of the GPAI model if providers have reasonable grounds to believe that their justification for the acceptability of systemic risks has been materially undermined (e.g., if the use cases of a GPAI model have materially changed or in case of serious incidents).
- Responsibility allocation: Providers of GPAI models are also required to (i) define responsibilities to manage systemic risks across all levels of their organisation, (ii) allocate appropriate resources to manage systemic risks, and (iii) promote a healthy risk culture. In this regard, the CoP provides specific guidelines for the distribution of responsibilities, such as assigning supervisory duties to the management board and transferring corresponding obligations within reporting lines.
- Serious incident reporting: In addition, providers of GPAI models need to implement a process to document and report information about serious incidents to the AI Office and, where relevant, the national supervisory authorities. The CoP also defines minimum standards for the information to be provided in such reports, as well as staggered reporting timelines that depend on the severity of the incident.
- Documentation, transparency, and updates: To complement the measures described above, providers of GPAI models must document the implementation of the Safety and Security chapter. The documentation must be kept up to date and retained for at least ten years after the model has been placed on the market. Where necessary to assess or mitigate systemic risks, providers of GPAI models are also required to publish summarised versions of their Security Framework and Model Reports.
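Purely as an illustration of the acceptance determination described above, the sketch below maps an identified systemic risk to a hypothetical pre-defined tier and derives a go/no-go decision. The tier names and the decision rule are invented for this illustration; the CoP leaves the definition of appropriate systemic risk tiers to the provider.

```python
from enum import Enum

class RiskTier(Enum):
    """Hypothetical systemic risk tiers; the CoP does not prescribe these."""
    ACCEPTABLE = 1
    ACCEPTABLE_WITH_MITIGATIONS = 2
    UNACCEPTABLE = 3

def acceptance_decision(tier: RiskTier, mitigations_in_place: bool) -> bool:
    """Illustrative rule for whether a systemic risk is acceptable."""
    if tier is RiskTier.ACCEPTABLE:
        return True
    if tier is RiskTier.ACCEPTABLE_WITH_MITIGATIONS:
        return mitigations_in_place  # e.g. input/output filtering deployed
    return False  # UNACCEPTABLE: do not place the model on the market

# Example: a risk assessed into the middle tier, mitigations deployed.
assert acceptance_decision(RiskTier.ACCEPTABLE_WITH_MITIGATIONS, True)
```

Whatever tiering a provider adopts, the CoP requires the resulting determination, and its justification, to be documented in the Security Framework and reflected in the Model Report.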
Impact and Enforcement
- Potential enforcement risks: The AI Office has the power to enforce the EU AI Act’s requirements for providers of GPAI models and to sanction non-compliance. In particular, once its enforcement powers under Art. 101 EU AI Act apply on 2 August 2026, the AI Office will be able to impose fines of up to 3% of global annual turnover or €15 million (whichever is higher). Compliance with the CoP does not exclude the imposition of fines; however, the AI Office will take commitments made in accordance with the CoP into account when assessing the amount of a fine. In this context, adherence to the CoP may be a tool to mitigate sanctions. In particular, in the FAQs to the Code of Practice, the AI Office stated that it will not necessarily consider CoP signatories to have broken their commitments, and will not impose penalties for violations of the GPAI rules, where signatories do not immediately implement the CoP in full. Instead, the AI Office will assume such signatories are acting in good faith, whilst noting that the Commission will fully enforce the GPAI requirements from 2 August 2026.
- Impact of the Code of Practice: As outlined above, compliance with the CoP does not exclude the risk of potential fines for providers of GPAI models. To date, whether and how the AI Office may handle investigations against signatories remains unclear. Such proceedings are likely to focus particularly on whether the measures implemented by the provider meet CoP requirements.
- Reducing the risk of civil claims: Compliance with the CoP may reduce the risk of civil claims. The EU AI Act does not establish any dedicated legal basis for claims for material or immaterial damages by affected individuals; however, individuals can be expected to invoke Art. 82 of the EU GDPR, or other provisions of EU and Member State law, to claim damages in case of non-compliance with the EU AI Act. Comprehensive documentation prepared under the CoP may help providers of GPAI models (and potentially downstream providers) defend against such claims for damages.
- Other potential means to demonstrate compliance: Providers of GPAI models that do not join the CoP have the option to demonstrate compliance with the requirements under Art. 53 et seqq. EU AI Act by other appropriate means. The burden is on the GPAI provider to demonstrate its compliance with the EU AI Act and explain to the AI Office how the measures it has taken meet the EU AI Act’s requirements. For practical reasons, the AI Office may expect such providers of GPAI models to adopt standards comparable to the CoP, although holding such providers to the requirements of the CoP specifically would undermine its voluntary nature.
Outlook
While some of the requirements enshrined in the CoP are capable of shaping industry norms, particularly around transparency, complaint handling, and the treatment of paywalled content, the most controversial aspect of the CoP — the potential for extraterritorial copyright obligations — has been omitted, leaving providers of GPAI models with a degree of interpretative and operational flexibility. Without any clear reference to the extraterritorial application of the CoP’s Copyright chapter — and with signatories able to sign up to some but not all of the CoP chapters — signatories may find the CoP is a useful tool to signal alignment with evolving expectations without materially constraining their current practices.