Italy AI Chatbot Ruling 2026: DeepSeek, Mistral, and NOVA AI Must Now Warn Users That Their Answers May Be Wrong
Italy's AGCM closed three AI probes without finding infringement. DeepSeek, Mistral, and NOVA AI must now warn users about hallucination risks.
AI chatbots produce confident answers to questions they cannot reliably answer. The response arrives without hesitation, well-structured, fluent, indistinguishable from one that happens to be correct. Italy's competition authority, the AGCM, decided that this matters for consumer protection. On April 30, 2026, it published the results of three separate investigations into DeepSeek, Mistral AI, and NOVA AI, securing binding commitments from each company to warn Italian users, explicitly and at the moment of use, that AI-generated content may be wrong.
No fines were issued. The AGCM closed each case under Article 27(7) of Italy's Consumer Code, applying existing consumer protection law to a problem that AI-specific legislation had not yet reached. The commitments it extracted are a European first: no regulator had previously required AI companies to tell their users, at the moment of use, that fluency and accuracy are separate things.
What each company agreed to
The three cases were opened separately between April and August 2025. The AGCM's concern in each was the same: that the companies' AI chatbots could produce content that was incorrect, misleading, or entirely fabricated, without signalling that possibility to users at the moment of interaction. The authority framed this as a potentially unfair commercial practice under Articles 20, 21, and 22 of Italy's Consumer Code, particularly in areas where acting on AI outputs could cause direct harm, among them finance, health, and law.
The commitments binding all three companies follow the same structure. Permanent disclaimers must now appear directly beneath chat windows in Italian, warning users that generated content may be inaccurate and providing hyperlinks to further explanation. Pre-contractual information, presented before registration or purchase, must explicitly state that outputs may not always be reliable and should be independently verified.
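What that asks of a product team is concrete but modest. The sketch below is purely illustrative: the AGCM decisions specify the substance and placement of the disclosure, not its markup, so the element structure, the wording, and the /ai-accuracy URL here are assumptions, not text from any of the three decisions.

```typescript
// Hypothetical sketch of the interface-level requirement: a permanent,
// non-dismissible disclaimer rendered directly beneath the chat input,
// in Italian, with a link to a fuller explanation.

// "AI-generated content may be inaccurate or incorrect.
//  Verify important information independently."
const DISCLAIMER_IT =
  "I contenuti generati dall'IA possono essere imprecisi o errati. " +
  "Verifica le informazioni importanti in modo autonomo. ";

function renderChatWithDisclaimer(root: HTMLElement): void {
  const input = document.createElement("textarea");
  input.placeholder = "Scrivi un messaggio...";

  const disclaimer = document.createElement("p");
  disclaimer.className = "ai-disclaimer";
  disclaimer.textContent = DISCLAIMER_IT;

  const moreInfo = document.createElement("a");
  moreInfo.href = "/ai-accuracy"; // hypothetical explanatory page
  moreInfo.textContent = "Maggiori informazioni";
  disclaimer.appendChild(moreInfo);

  // The disclaimer sits under the chat input itself, visible for every
  // interaction, rather than in pre-contractual documents alone.
  root.append(input, disclaimer);
}
```

The point is placement, not prose: the same words shown only during registration or in a terms page would not meet the commitment.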
DeepSeek, operated jointly by Hangzhou DeepSeek Artificial Intelligence and Beijing DeepSeek Artificial Intelligence, committed to the most extensive set of measures. Alongside the standard interface and pre-contractual requirements, it agreed to a full Italian-language translation of relevant disclosures and to internal compliance training workshops.
The company also committed to technical investment aimed at reducing output errors across four identified categories: search-related inaccuracies, rewriting errors, legal hallucinations, and behavioural hallucinations. The AGCM described this as going beyond disclosure to address the underlying phenomenon, while noting explicitly that current technology cannot eliminate hallucinations entirely. DeepSeek's commitments, accepted at the authority's December 16, 2025 session, carried a 90-day implementation deadline and required a full compliance report within 120 days.
NOVA AI, operated by Scaleup Yazilim Hizmetleri Anonim Şirketi, a Turkish company, had to address a second transparency failure beyond hallucination disclosure. The platform provides unified access to multiple underlying AI models, including those from OpenAI, Anthropic, Google, and DeepSeek, through a single interface. The AGCM found that users had no clear understanding of this architecture and could not determine whether they were interacting with a proprietary AI or receiving unprocessed responses from third-party systems. Scaleup committed to making the platform's aggregator character explicit, with a clear disclosure that it does not itself process or combine the outputs of the underlying models.
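In code terms, the aggregator pattern the AGCM wanted disclosed looks roughly like the pass-through below. This is a sketch under stated assumptions: NOVA AI's actual architecture is not public, and the provider names and the callUpstream client are hypothetical stand-ins. What it illustrates is the commitment itself: the upstream response is relayed unmodified, and the user can see which third-party model produced it.

```typescript
// Hypothetical pass-through aggregator. The routing shape and the
// callUpstream client are illustrative; the commitment it mirrors is
// that responses are relayed as-is and their origin is disclosed.

type Provider = "openai" | "anthropic" | "google" | "deepseek";

interface AggregatedReply {
  provider: Provider; // surfaced to the user, per the disclosure commitment
  text: string;       // upstream output, not processed or combined
}

// Placeholder for each provider's own API client.
declare function callUpstream(provider: Provider, prompt: string): Promise<string>;

async function relay(provider: Provider, prompt: string): Promise<AggregatedReply> {
  const text = await callUpstream(provider, prompt);
  return { provider, text }; // passed through unmodified
}
```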
The consumer protection argument
The AGCM's intervention sits inside consumer protection law, applied to AI without AI-specific legislation to rely on. That choice is deliberate and consequential.
The authority's position is direct. AI chatbots can cause harm when users rely on their outputs in high-stakes areas such as finance, health, or law. Consumer protection law, which has long required disclosure of product risks at the point of use, applies to that harm. The AGCM's specific contribution is about location: the warning must be present inside the chat interface, at the moment the answer appears, not in documents the average user will never read.
The AGCM has been among Europe's most active regulators in applying existing legal frameworks to AI conduct. In July 2025, it opened an abuse of dominance investigation into Meta for integrating Meta AI into WhatsApp in ways that allegedly excluded competing AI chatbots from the service. The authority imposed interim measures in December 2025, suspending the relevant WhatsApp business terms while the wider investigation continues.
What happens next
DeepSeek's commitments were accepted in December 2025; Mistral AI's followed at the February 17, 2026 authority session and NOVA AI's at the April 21, 2026 session. Each company has 120 days from its acceptance decision to demonstrate compliance, or the AGCM may reopen proceedings and impose fines of between 10,000 and 10,000,000 euros.
The framework the AGCM established does not depend on new legislation to spread. Consumer protection authorities in other EU member states operate under the same legal structure, and the three cases reach companies headquartered in China, France, and Turkey, applying an identical standard to each without requiring coordination with those home jurisdictions. For the fintech sector, where AI tools are being deployed at scale for financial guidance and advisory functions, that standard now has a concrete definition: a disclaimer buried in terms of service does not satisfy it.
Hallucinations cannot currently be eliminated, only disclosed. Italy has now established, in law, that disclosing them is the company's obligation, not the user's problem to anticipate. A person using a tool that cannot signal its own errors should not be left to discover that on their own.
Editor's note
Every piece published on The Bright Minded goes through careful verification, but mistakes can happen. If you spot an error, have additional information, or want to flag anything, write to rosalia@thebrightminded.com.