ChatGPT and similar entities now face mandatory transparency under new EU regulations

ChatGPT and similar artificial intelligence platforms are now obliged to disclose their workings under newly enforced EU regulations.

EU legislation now demands openness from ChatGPT and similar entities

The European Union has introduced new transparency rules for General-Purpose AI (GPAI) systems, effective from August 2, 2025. These rules apply to providers such as ChatGPT and Google’s Gemini, aiming to establish a strong framework for accountability, data transparency, and safety without compromising intellectual property rights.

Under the new rules, providers of AI models must ensure comprehensive transparency, safety, and accountability measures. This includes maintaining up-to-date, detailed records covering the AI model’s architecture, training processes, datasets, and evaluation methods. A "sufficiently detailed summary" of the training content must also be published via an AI Office template, explaining the nature and sources of the data used without revealing trade secrets or violating intellectual property (IP) protections.

GPAI models are classified by risk levels. Systems considered to pose “systemic risks” face additional requirements such as risk assessments, incident reporting, cybersecurity measures, and stricter compliance obligations. Existing models have until August 2, 2027, to comply with the new disclosure requirements, while new models entering the EU market from August 2, 2025, face immediate obligations.

The guidelines are periodically reviewed to adapt to technological changes and regulatory experiences. The EU has also released a General-Purpose AI Code of Practice, which supplements the mandatory rules, offering voluntary but influential guidelines related to transparency, copyright compliance, and safety obligations for AI providers.

The new rules balance transparency with IP rights: providers must disclose enough detail about their training data and model design to support accountability and prevent unfair practices, while proprietary information remains explicitly protected. Providers must also comply with EU copyright law when sourcing and using training data, reducing their exposure to infringement claims.

The framework includes provisions to preserve IP rights, meaning AI providers retain protection over their core innovations and trade secrets even while meeting transparency and safety requirements. The guidelines give providers clearer interpretations of their obligations, reducing uncertainty about how transparency and IP protection coexist under the new regime.

Operators of these AI models must also disclose how their systems function and what data was used to train them, including the sources of their training data and the measures taken to protect intellectual property. Under the EU's guidelines, rights holders should also have a point of contact at the companies.

The European AI Authority will enforce the AI Act rules from August 2026 for new models, and from August 2027 for models already on the market before August 2, 2025. Violations of the AI Act can draw fines of up to 15 million euros or three percent of a company's total global annual turnover.

Google, developer of the Gemini AI, has announced its intention to sign the code of practice, despite earlier concerns that the AI Act could hinder innovation. However, several national and international alliances of authors, artists, and publishers have criticized the rules, arguing that intellectual property is not sufficiently protected.

In summary, the EU’s new transparency rules for GPAI systems establish a framework for accountability, data transparency, and safety without compromising intellectual property rights. They require summary disclosures that reveal the characteristics and risks of the underlying training data while protecting proprietary information and ensuring copyright compliance. The European AI Authority will enforce the rules, giving providers clearer interpretations of their obligations and reducing uncertainty about how transparency and IP protection coexist.

  1. In adherence to the EU's transparency rules for General-Purpose AI (GPAI) systems, AI providers like Google's Gemini must publish a "sufficiently detailed summary" of their training content, explaining the nature and sources of the data used, while safeguarding intellectual property (IP) rights and trade secrets.
  2. Under the new AI Act, operators and developers of GPAI models must disclose how their systems function, the data used to train them, and the measures taken to protect intellectual property. The European AI Authority will enforce these rules from August 2026 onwards, with fines for violations of up to 15 million euros or three percent of the company's total global annual turnover.
