The European Commission has opened a consultation to gather feedback on forthcoming guidelines “on implementing the AI Act’s rules on high-risk AI systems”.  (For more on the definition of a high-risk AI system, see our blog post here.)  The consultation is open until July 18, 2025, following which the Commission will publish a summary of the consultation results through the AI Office.

For context, the AI Act contemplates two categories of “high-risk” AI systems:

  1. Products—or safety components of products—covered by the EU product safety legislation identified in Annex I, where the product or safety component is subject to a third-party conformity assessment (Art. 6(1)); and
  2. Certain systems that fall within eight categories of use cases identified in Annex III, namely, (1) biometrics; (2) critical infrastructure; (3) education and vocational training; (4) employment, workers’ management and access to self-employment; (5) access to and enjoyment of essential private services and essential public services and benefits; (6) law enforcement; (7) migration, asylum and border control management; and (8) administration of justice and democratic processes (Art. 6(2)). Only certain use cases within each category are considered high-risk—not the entire category itself. In addition, with one exception, the AI systems must be “intended to be used” for the particular use case, e.g., “AI systems intended to be used for emotion recognition”—a use case within biometrics (category one) (id., emphasis added).

Even if an AI system falls within scope of Annex III, it will not be considered high-risk (subject to one exception) if it “does not pose a significant risk of harm to the health, safety or fundamental rights of natural persons” (Art. 6(3)).  Where a provider concludes that Art. 6(3) applies to its system, that provider must document its assessment, and it must still register the system in the EU database for high-risk AI systems (Art. 6(4)).

Providers of high-risk AI systems are subject to a range of obligations, primarily set out in Arts. 8-21 of the AI Act.  Deployers of high-risk AI systems are subject to a separate set of obligations primarily set out in Art. 26 of the AI Act.  The AI Act also contemplates certain scenarios where the obligations of the original provider of a high-risk AI system shift to a downstream actor (the so-called “value chain” obligations) (Art. 25(1)).

The consultation aims to “collect practical examples and clarify issues relating to high-risk AI systems” from a wide variety of stakeholders.  It invites input on five issues:

  • (1) classification rules for Annex I high-risk AI systems, including the “concept of a safety component”;
  • (2) classification rules for Annex III high-risk AI systems, including about each of the eight categories of Annex III use cases, the Art. 6(3) exemption, and the distinction between high-risk AI systems and the prohibited AI practices set out in Art. 5 of the AI Act (for more on prohibited practices, please see our blog post here);
  • (3) general questions for high-risk classification, including the notion of “intended purpose” and “its interplay with general purpose AI systems”;
  • (4) questions related to the obligations applicable to high-risk AI systems and the value-chain obligations; and
  • (5) questions about the need to amend the use cases in Annex III and the prohibited practices in Art. 5.

Obligations on Annex III and Annex I high-risk AI systems are scheduled to apply as of August 2, 2026 and August 2, 2027, respectively.  Providers and deployers of high-risk AI systems should watch this space and stay apprised of other AI Act-related regulatory updates, including EU lawmakers’ comments on delaying enforcement of the AI Act (more on that here).

This blog was drafted with the assistance of Dumitha Gunawardene, a trainee in the London office.

***

The Covington team continues to monitor regulatory developments on AI, and we regularly advise the world’s top technology companies on their most challenging regulatory and compliance issues in the EU and other major markets. If you have questions about AI regulation, or other tech regulatory matters, we are happy to assist with any queries.

Madelaine Harrington

Madelaine Harrington is an associate in the technology and media group. Her practice covers a wide range of regulatory and policy matters at the cross-section of privacy, content moderation, artificial intelligence, and free expression. Madelaine has deep experience with regulatory investigations, and has counseled multi-national companies on complex cross-jurisdictional fact-gathering exercises and responses to alleged non-compliance. She routinely counsels clients on compliance within the EU regulatory framework, including the General Data Protection Regulation (GDPR), among other EU laws and legislative proposals.

Madelaine’s representative matters include:

  • coordinating responses to investigations into the handling of personal information under the GDPR,
  • counseling major technology companies on the use of artificial intelligence, specifically facial recognition technology in public spaces,
  • advising a major technology company on the legality of hacking defense tactics, and
  • advising a content company on compliance obligations under the DSA, including rules regarding recommender systems.

Madelaine’s work has previously involved representing U.S.-based clients on a wide range of First Amendment issues, including defamation lawsuits, access to courts, and FOIA. She maintains an active pro bono practice representing journalists with various news-gathering needs.