Client Alert

European Commission Publishes Draft Guidance on Reporting Serious AI Incidents

October 28, 2025
The draft provides further detail on the reporting obligations under Article 73 of the EU AI Act; companies may submit comments until 7 November 2025.

On 26 September 2025, the European Commission (Commission) issued draft guidance (Guidance) on reporting serious incidents under Article 73 of Regulation (EU) 2024/1689 on artificial intelligence (EU AI Act). Article 73 of the EU AI Act requires providers of high-risk AI systems to promptly notify national market surveillance authorities of serious incidents arising from the use of those systems.

The Guidance clarifies the conditions that constitute a “serious incident” and sets out the obligations of relevant actors. The draft also includes a reporting template for submissions to the competent market surveillance authority. Finally, it addresses how the obligations under Article 73 of the EU AI Act interact with reporting duties under other EU legislation.

Companies may submit comments on the draft until 7 November 2025. Following the consultation period, the Commission will publish a final version of the Guidance, which is expected to apply from 2 August 2026.

Background: Reporting Obligations for Serious AI Incidents

Article 73 of the EU AI Act requires providers of high‑risk AI systems to report serious incidents to the competent national market surveillance authority and sets out the follow‑up steps after a report is filed, including the initiation of investigations and the implementation of corrective measures.

Article 3(49) of the EU AI Act defines a “serious incident” as an incident or malfunction of an AI system that directly or indirectly results in a serious outcome. Such outcomes include (a) death or serious harm to an individual’s health; (b) serious and irreversible disruption to the management or operation of critical infrastructure; (c) infringements of obligations under EU law intended to protect fundamental rights; or (d) serious harm to property or the environment.

The reporting obligations only apply to providers of high‑risk AI systems. The EU legislator considers these systems to pose a significant risk to health, safety, or fundamental rights. “Providers” are entities that develop, or have developed, a high‑risk AI system and place it on the market or put it into service under their own name or trademark (see Article 3(3) of the EU AI Act).

Separately, Article 55(1)(c) of the EU AI Act requires providers of General Purpose AI (GPAI) models with systemic risks to notify the Commission’s AI Office and national competent authorities of serious incidents. The Guidance does not currently address that parallel obligation; however, entities subject to both reporting regimes should align their reporting processes where possible and may use the Guidance as a reference point. The Commission’s GPAI Code of Practice also contains provisions on the reporting duty under Article 55(1)(c) that companies should consider when designing their reporting workflows.

What is New? Broad Definitions and Scope

The Guidance refines the scope of Article 73 of the EU AI Act to clarify the circumstances that trigger a reporting obligation.

In the Commission’s view, an indirect causal link between the AI system and the harm is sufficient to establish a duty to report. For example, an incorrect medical analysis provided by an AI system, which results in harm only after a subsequent clinical decision, would still qualify as a reportable incident. The same approach, according to the Commission, applies to a loan denial that was based on a flawed AI assessment or disadvantageous treatment of qualified applicants through a discriminatory, AI‑enabled selection procedure. While these are non‑exhaustive examples from the consultation draft, they signal that the Commission envisions a broad scope for the reporting requirements.

The Commission also proposes a simplified reporting regime for high‑risk AI systems in sectors with existing, equivalent reporting obligations, such as critical infrastructure under the NIS‑2 Directive (2022/2555). In those sectors, the Article 73 EU AI Act reporting obligation would apply only to fundamental rights violations; all other incidents should be reported under the relevant sector‑specific rules.
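Purely for illustration, the proposed allocation can be reduced to a simple routing rule. The sketch below is a hypothetical reading of the consultation draft, not an official tool, and its labels are invented for clarity:

```python
def reporting_channel(equivalent_sector_regime: bool,
                      fundamental_rights_violation: bool) -> str:
    """Illustrative routing under the draft's simplified regime.

    Where an equivalent sector-specific regime (e.g. NIS 2) applies,
    only fundamental rights violations would still be reported under
    Article 73 of the EU AI Act.
    """
    if fundamental_rights_violation or not equivalent_sector_regime:
        return "Article 73 EU AI Act report"
    return "sector-specific report (e.g. NIS 2)"
```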

Reporting Timelines and Investigation Duties

Reporting timelines are tight. Companies must notify without undue delay and, in any event, within 15 days of becoming aware of a serious incident. The deadline shortens to 10 days where the incident may have caused a death, and to two days for widespread infringements or a serious and irreversible disruption of critical infrastructure.
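To make the tiered deadlines concrete, they could be encoded in a simple triage helper such as the hypothetical sketch below. It assumes the clock runs in calendar days from the point of awareness; how "awareness" is determined is a matter for the final Guidance.

```python
from datetime import date, timedelta


def reporting_deadline(aware_on: date,
                       possible_death: bool = False,
                       widespread_infringement: bool = False,
                       critical_infrastructure: bool = False) -> date:
    """Latest permissible notification date under Article 73 (illustrative).

    Default: 15 days from awareness; 10 days if a death may have been
    caused; two days for widespread infringements or a serious and
    irreversible disruption of critical infrastructure.
    """
    days = 15
    if possible_death:
        days = 10
    if widespread_infringement or critical_infrastructure:
        days = 2  # the strictest applicable deadline prevails
    return aware_on + timedelta(days=days)


# Example: a provider becomes aware on 1 March 2027 of an incident that
# may have caused a death -> notification due by 11 March 2027.
print(reporting_deadline(date(2027, 3, 1), possible_death=True))
```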

To meet these deadlines, Article 73(5) of the EU AI Act allows providers to submit an initial, incomplete report and supplement it later. After reporting, the provider must promptly investigate the incident, as outlined in Article 73(6) of the EU AI Act. The Guidance further specifies these investigative duties, including a prohibition on altering the AI system in a way that could affect the subsequent analysis of the incident before the authorities have been informed.

Enforcement Risks and Recommended Actions

Reports of serious incidents often trigger market-surveillance measures and further regulatory action by the competent authorities. Such measures may be ordered within seven days of receiving the report (Article 73(8) of the EU AI Act). The measures are governed by Article 19 of the EU Market Surveillance Regulation (2019/1020) and may include product recalls, market withdrawals, or prohibitions on making products available.

The reporting template the Commission has released for consultation alongside the Guidance is extensive. Given the short deadlines, companies should keep their documentation up to date so they can complete the required notification promptly when an incident occurs.

Non-compliance with reporting obligations carries significant liability risks. The EU AI Act provides for administrative fines of up to €15 million or 3% of worldwide annual turnover, whichever is higher. Additional supervisory measures may include warnings, orders, distribution restrictions, withdrawals, and the public disclosure of enforcement actions. Companies may also face civil claims from affected users.
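To put the “whichever is higher” mechanism in perspective: the percentage-based cap overtakes the fixed €15 million cap once worldwide annual turnover exceeds €500 million. A back-of-the-envelope helper (illustrative only; it computes the statutory ceiling, not an actual fine):

```python
def fine_ceiling_eur(worldwide_annual_turnover_eur: float) -> float:
    """Upper bound of the administrative fine: the higher of EUR 15m or 3% of turnover."""
    return max(15_000_000.0, 0.03 * worldwide_annual_turnover_eur)


print(fine_ceiling_eur(400_000_000))    # 15000000.0 -> fixed cap applies
print(fine_ceiling_eur(2_000_000_000))  # 60000000.0 -> 3% of turnover applies
```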

Companies should therefore review their reporting processes early and integrate the new obligations into their incident-reporting frameworks. This includes clear incident-response protocols, monitoring to detect potential serious incidents, measures to preserve evidence, and defined internal roles and responsibilities for reporting. These processes should be closely aligned with reporting obligations under other regulatory regimes, such as the GDPR, and informed by best practices and experience from product-safety regulation.
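As one possible building block for such a framework, candidate incidents could be logged in a structured internal record so that a notification can be assembled quickly when a deadline starts running. The fields below are purely illustrative and do not reproduce the Commission’s draft template:

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class IncidentRecord:
    """Hypothetical internal log entry for a potential serious incident."""
    system_name: str                  # affected high-risk AI system
    detected_on: date                 # when the provider became aware
    description: str                  # facts and suspected cause
    outcome_category: str             # e.g. "health", "infrastructure", "fundamental rights"
    causal_link: str                  # direct or indirect link to the AI system
    evidence_preserved: bool = False  # logs/model state frozen before any alteration
    notified_authorities: list[str] = field(default_factory=list)
```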

Conclusion and Outlook

The Guidance and accompanying reporting template are currently only in draft form. Given the numerous placeholders in the document, the Commission is likely to further refine the scope of Article 73 of the EU AI Act over the course of the consultation.

The public consultation runs until 7 November 2025. Latham & Watkins is available to assist clients in implementing or enhancing reporting processes. Companies should also consider submitting practical suggestions to clarify the Guidance and the reporting template.

This publication is produced by Latham & Watkins as a news reporting service to clients and other friends. The information contained in this publication should not be construed as legal advice. Should further analysis or explanation of the subject matter be required, please contact the lawyer with whom you normally consult. The invitation to contact is not a solicitation for legal work under the laws of any jurisdiction in which Latham lawyers are not authorized to practice. See our Attorney Advertising and Terms of Use.