Worldwide Regulatory Monitor for Artificial Intelligence
The European Commission published the long-awaited General-Purpose AI (GPAI) Code of Practice on July 10, 2025. This voluntary framework, designed to guide AI model providers in demonstrating compliance with the upcoming EU AI Act, will become effective on August 2, 2025.
Currently, the Code is under review by EU Member States and the Commission. If deemed adequate, it will be formally endorsed and adopted via an EU implementing act, serving as a recognised instrument for AI Act compliance. However, due to tight timelines and ongoing assessments, it is unlikely the Code will be officially approved before the August 2 compliance deadline.
The Code is structured around three main chapters covering the areas most essential for GPAI model providers: transparency, intellectual property, and safety and security.
The transparency requirements oblige providers to disclose information about the capabilities, limitations, and intended use of their GPAI models, promoting responsible deployment and informed use. These requirements apply to all GPAI model providers.
The intellectual property chapter sets out guidance on handling copyright and other intellectual property issues arising from model training data and outputs, promoting clarity and protection of IP rights for all stakeholders.
The safety and security chapter establishes advanced safety and security requirements specifically for GPAI models deemed to present systemic risks, defined currently as those trained with more than \(10^{25}\) floating-point operations (FLOPs). This chapter aims to enhance frontier AI safety by mandating robust risk management and mitigation measures.
Notably, the \(10^{25}\) FLOPs threshold is targeted to capture existing frontier AI models and may evolve as AI technology develops. Providers adhering to the Code's safety provisions receive a "presumption of conformity" with Articles 53 and 55 of the AI Act, significantly incentivising adoption.
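The systemic-risk classification above reduces to a single compute comparison. The sketch below illustrates it; the function name and inputs are hypothetical, and the threshold is the one currently stated in the Code (it may change as the technology evolves):

```python
# Illustrative sketch (not legal advice): classifying a GPAI model as
# presenting "systemic risk" under the AI Act's current compute threshold.
# Only the threshold value (10^25 FLOPs) comes from the Code; the function
# name and inputs are hypothetical.

SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25  # cumulative training compute in FLOPs

def presents_systemic_risk(training_flops: float) -> bool:
    """Return True if a model's training compute exceeds the threshold."""
    return training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD

# A model trained with 5e25 FLOPs would fall in scope of the safety and
# security chapter; one trained with 1e24 FLOPs would not.
print(presents_systemic_risk(5e25))  # True
print(presents_systemic_risk(1e24))  # False
```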
The Code is the product of an extensive multi-stakeholder consultation process involving around 1,000 participants from industry, academia, civil society, and governments, making it the most comprehensive framework for GPAI to date.
The EU AI Act, the world's first comprehensive AI regulation, applies to all sectors and classifies AI systems into four risk categories: unacceptable, high, limited, and low or minimal risk. Non-compliance with the EU AI Act can result in fines of up to €35,000,000 or 7 percent of a company's total worldwide annual turnover, whichever is higher, for the most serious violations, down to €7,500,000 or 1 percent of total worldwide annual turnover for lesser infringements.
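The fine structure is a "whichever is higher" cap between a fixed amount and a share of turnover, so the maximum exposure scales with company size. A minimal sketch of that arithmetic, with hypothetical names and an illustrative turnover figure:

```python
# Illustrative sketch (not legal advice): AI Act fines are capped at the
# HIGHER of a fixed euro amount or a percentage of total worldwide annual
# turnover. The tiers below are the ones cited in the text; the turnover
# figure is purely illustrative.

def max_fine(turnover_eur: float, fixed_cap_eur: float, pct_cap: float) -> float:
    """Maximum possible fine: the greater of the fixed cap or a share of turnover."""
    return max(fixed_cap_eur, turnover_eur * pct_cap)

TURNOVER = 1_000_000_000  # hypothetical €1bn worldwide annual turnover

# Top tier (most serious violations): €35m or 7% of turnover.
print(max_fine(TURNOVER, 35_000_000, 0.07))  # 70000000.0
# Lower tier: €7.5m or 1% of turnover.
print(max_fine(TURNOVER, 7_500_000, 0.01))   # 10000000.0
```

For a company of this size, the percentage cap dominates in both tiers; for a small provider, the fixed amount would be the binding figure instead.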
Enforcement of the EU AI Act involves a combination of authorities, including national competent authorities, an AI Office within the Commission, an AI Board with Member States' representatives, and an advisory forum for stakeholders. The proposed AI Liability Directive would create a rebuttable presumption of causality against the defendant and give national courts the power to order disclosure of evidence about high-risk AI systems suspected of causing damage.
In sum, while not yet legally binding, the EU’s General-Purpose AI Code of Practice sets a globally pioneering standard emphasising transparency, intellectual property clarity, and frontier AI safety, aligning with the imminent AI Act obligations effective August 2025.
- The European Commission released a press release announcing the General-Purpose AI (GPAI) Code of Practice on July 10, 2025.
- The GPAI Code of Practice, designed to demonstrate compliance with the upcoming EU AI Act, will become effective on August 2, 2025, pending endorsement and adoption via an EU implementing act.
- The GPAI Code of Practice is structured into three main chapters, focusing on transparency, intellectual property, and safety and security.
- All GPAI model providers are required to disclose information about the capabilities, limitations, and intended use of their models under the transparency requirements.
- The intellectual property chapter provides guidelines for handling copyright and intellectual property issues related to model training data and outputs.
- The safety and security chapter mandates advanced safety and security requirements for GPAI models that present systemic risks, defined as those trained with more than \(10^{25}\) floating-point operations (FLOPs).
- Adherence to the Code's safety provisions gives providers a "presumption of conformity" with Articles 53 and 55 of the AI Act, incentivizing adoption.
- The GPAI Code of Practice, following an extensive multi-stakeholder consultation process, is the most comprehensive framework for GPAI to date, in line with the imminent AI Act obligations effective August 2025.