AI chatbots are transforming how businesses handle consumer inquiries and complaints, offering speed and availability that traditional channels often cannot match. However, the European Commission’s recent Digital Fairness Fitness Check has spotlighted a gap: EU consumers currently lack a cross-sectoral right to demand human contact when interacting with AI chatbots in business-to-consumer settings. It remains unclear whether and how the European Commission will address this gap. The Digital Fairness Act could do so, but the Commission’s proposal is not expected to be published until the third quarter of 2026. This post highlights key consumer protection considerations for companies deploying AI chatbots in the EU market.
AI Chatbots Cannot Be the Only Contact Channel
Under EU law, particularly the Consumer Rights Directive (“CRD”) and the eCommerce Directive, consumers must have access to traditional communication channels such as the trader’s postal address, telephone number, and email address. The Court of Justice of the EU has made clear that consumers must be able to contact traders directly, quickly, and effectively (Case C-649/17). While chatbots can assist, they cannot replace mandatory human contact options.
AI Chatbots as Supplementary Communication Channels
The CRD requires traders to disclose their primary contact details before concluding a contract, but it does not prohibit offering AI chatbots as additional communication tools. Where chatbots enable consumers to retain durable records of their interactions, including timestamps, traders should inform consumers of this option. A durable record is information stored in a medium that remains accessible and unalterable for future reference, such as an email or a downloadable file.
In any event, certain communications, such as the acknowledgment of a consumer’s right of withdrawal, must be provided in a “durable medium,” ensuring consumers have a stable and accessible record of important contractual information.
Human Oversight and the Right to Human Intervention
EU legislation increasingly emphasizes the importance of human oversight of AI systems, especially high-risk ones:
- The AI Act will, from August 2026, require human oversight of certain high-risk AI systems, such as tools used to determine eligibility for essential public services (e.g., healthcare or social benefits), credit scoring systems affecting access to loans, or AI used in recruitment processes to automatically screen candidates.
- Sector-specific laws in financial services, including the Consumer Credit Directive and the Distance Marketing Directive, guarantee consumers the right to human intervention when dealing with automated systems like chatbots.
- The GDPR protects individuals from decisions based solely on automated processing that produce legal or similarly significant effects, granting a right to human review.
Additional Considerations for Businesses
- AI Transparency and User Rights. Under the AI Act, starting in August 2026, AI providers must inform users when they are interacting with an AI system, unless this is obvious from the context. AI-generated content must be clearly labeled in a machine-readable way, except for minor edits. Individuals affected by decisions of high-risk AI systems that impact their rights are entitled to clear explanations. These rules are designed to ensure transparency, accountability, and user protection.
- Rankings and Ratings. AI chatbots can be used to search for products, services, or information, presenting results based on algorithms that influence consumer choices. Under the CRD, companies must clearly disclose the main parameters determining the ranking of offers presented by AI chatbots. The Unfair Commercial Practices Directive (“UCPD”) prohibits misleading or manipulated reviews and ratings and requires them to be genuine and fairly presented. These rules protect consumers from deception and promote transparency and fairness, including in AI-driven search results.
- Accessibility and Non-Discrimination. Deployers of chatbots must also ensure accessibility for all consumers, including those with disabilities, in line with the European Accessibility Act (which applies from June 2025), and avoid discriminatory practices based on age, gender, ethnicity, or digital literacy.
- Data Protection and Privacy. Chatbot interactions must comply with GDPR requirements, including transparency about data collection and processing, purpose limitation, and data security.
- Liability and Accountability. Traders remain fully responsible for all communications with consumers, including those conducted through AI chatbots. Businesses should ensure that chatbot responses are accurate, as they can be held accountable and liable for any misleading or incorrect information provided by a chatbot, for example under the new Product Liability Directive, which applies as of December 2026.
- Cross-Border and Multilingual Considerations. Traders should clearly specify the languages in which their chatbots can operate. If automatic translation is used, they should inform consumers about the expected level of translation accuracy to manage expectations. Under the UCPD, if a trader communicates with a consumer in a language other than the official language of the consumer’s country before a purchase, its AI chatbots should provide after-sales service in that same language, unless otherwise stated.
* * *
Covington & Burling continues to monitor these developments closely and advises companies on navigating EU consumer protection law in the age of AI. Compliance with these evolving requirements will not only meet regulatory demands but also build consumer trust in an increasingly digital marketplace.