On May 7, 2025, the European Commission published a Q&A on the AI literacy obligation under Article 4 of the AI Act (the “Q&A”).  The Q&A builds on the Commission’s guidance on AI literacy provided in its February 2025 webinar, covered in our earlier blog here.  Among other things, the Commission clarifies that although the AI literacy obligation has applied since February 2, 2025, the national market surveillance authorities tasked with supervising and enforcing the obligation will only start doing so from August 3, 2026 onwards.

Key considerations from the European Commission’s Q&A on AI literacy are as follows:

  • AI literacy requirements apply to all providers and deployers of AI systems.  In practical terms, the obligation requires organizations to train anyone directly dealing with AI systems.  This covers not only employees, but also, for example, contractors and service providers interacting with or using AI systems.
  • There are no specific requirements on what an AI literacy programme must include.  Nevertheless, the European Commission considers that, at a minimum, an AI literacy programme should:
    • ensure a general understanding of AI within the organization;
    • consider the role of the organization (e.g., as provider or deployer of AI systems);
    • take into account the risk of the particular AI systems provided or deployed in the organization; and
    • build AI literacy actions based on the factors listed above, considering, among other things, the staff’s technical capabilities and the context in which the AI systems are used.
  • Organizations are not required to issue a training certificate or similar credentials to prove that staff have completed AI literacy training.  Internal records of trainings and other initiatives are sufficient.
  • Merely relying on AI systems’ instructions for use or asking staff to read them might be ineffective and insufficient to provide an adequate level of AI literacy.
  • Organizations whose employees deploy generative AI systems for tasks such as writing advertising text or translating content must comply with AI literacy requirements, including informing staff about specific risks, such as hallucinations.
  • National market surveillance authorities are tasked with overseeing AI literacy compliance.  The AI Act requires Member States to appoint these national market surveillance authorities by August 2, 2025.  Although the AI literacy obligation already applies as of February 2, 2025, supervision and enforcement of this obligation by market surveillance authorities will only start on August 3, 2026. 

*          *          *

The Covington team continues to monitor regulatory developments on AI, and we regularly advise the world’s top technology companies on their most challenging regulatory and compliance issues in the EU and other major markets.  If you have questions about AI regulation, or other tech regulatory matters, we are happy to assist with any queries.

This blog post was written with the contributions of Alberto Vogel.

Dan Cooper

Daniel Cooper is co-chair of Covington’s Data Privacy and Cyber Security Practice, and advises clients on information technology regulatory and policy issues, particularly data protection, consumer protection, AI, and data security matters. He has over 20 years of experience in the field, representing clients in regulatory proceedings before privacy authorities in Europe and counseling them on their global compliance and government affairs strategies. Dan regularly lectures on the topic, and was instrumental in drafting the privacy standards applied in professional sport.

According to Chambers UK, his “level of expertise is second to none, but it’s also equally paired with a keen understanding of our business and direction.” It was noted that “he is very good at calibrating and helping to gauge risk.”

Dan is qualified to practice law in the United States, the United Kingdom, Ireland and Belgium. He has also been appointed to the advisory and expert boards of privacy NGOs and agencies, such as the IAPP’s European Advisory Board, Privacy International and the European security agency, ENISA.

Sam Jungyun Choi

Recognized by Law.com International as a Rising Star (2023), Sam Jungyun Choi is an associate in the technology regulatory group in Brussels. She advises leading multinationals on European and UK data protection law and new regulations and policy relating to innovative technologies, such as AI, digital health, and autonomous vehicles.

Sam is an expert on the EU General Data Protection Regulation (GDPR) and the UK Data Protection Act, having advised on these laws since they started to apply. In recent years, her work has evolved to include advising companies on new data and digital laws in the EU, including the AI Act, Data Act and the Digital Services Act.

Sam’s practice includes advising leading companies in the technology, life sciences, and gaming sectors on regulatory, compliance, and policy issues relating to privacy and data protection, digital services, and AI. She advises clients on the design of new products and services, preparing privacy documentation, and developing data and AI governance programs. She also advises clients on matters relating to children’s privacy and policy initiatives relating to online safety.