EU unveils draft Code of Practice for general-purpose AI
The Code of Practice is intended to guide the development of trustworthy general-purpose AI models in the EU, building on the EU AI Act.
Published on November 15, 2024
The European Commission has released the first draft of its General-Purpose AI Code of Practice to guide the development of trustworthy AI models in the EU. Drafted by independent experts and informed by multi-stakeholder consultations, the code addresses critical issues such as transparency, copyright compliance, and systemic risk assessment. It builds on the EU AI Act, the regulation that established a risk-based approach to AI applications.
The General-Purpose AI Code of Practice is a European Commission initiative to establish detailed guidelines for the development and deployment of general-purpose AI models. The aim is to provide a clear framework for innovation while respecting EU laws and values. The code translates the principles of the AI Act into actionable measures, giving companies clarity on legal requirements while protecting citizens' rights. The need for such a code stems from the rapid pace of AI development, which brings potential risks around copyright infringement, misuse, and discrimination.
The code, which is due to apply from August 2025, promises to balance innovation with safety and ethical concerns. With nearly 1,000 stakeholders involved in the drafting process, it marks an important step toward harmonized AI rules in Europe and, potentially, a global standard for responsible AI development.
Addressing systemic risks
The Code of Practice aims to harmonize AI rules across the EU, countering the fragmented approach seen in other regions, such as the US. By creating a unified regulatory framework, the EU seeks to spread regulatory costs across the AI value chain and thereby reduce the burden on individual companies. This harmonization is seen as a key benefit, potentially putting Europe at the forefront of responsible AI development and deployment.
A crucial aspect of the code is its focus on systemic risk assessment and mitigation. AI models, especially general-purpose models, can have far-reaching consequences. The code sets out methodologies for identifying and managing these risks, helping to ensure that AI development remains consistent with societal values and ethical standards. This is especially important at a time when AI is being integrated across industries, which calls for guidelines that ensure transparency and accountability.
One of the code's main goals is to strike a balance between encouraging AI-driven innovation and protecting fundamental rights. By setting clear objectives and measures, the code supports the growth of the AI safety ecosystem while remaining flexible enough to keep pace with the evolving technology. This approach is designed to encourage innovation within a framework that prioritizes safety, ethics, and societal values.
Future-proofing AI development
The Code of Practice is designed with the future in mind, so that it remains relevant as AI technologies evolve. To that end, it is aligned with international standards and proportionate to the size of AI model providers. By addressing the needs of both large companies and smaller providers, the code aims to create an environment for sustainable AI innovation that benefits the broader AI community and society as a whole.
With its comprehensive approach, the General-Purpose AI Code of Practice has the potential to become a global benchmark for responsible AI governance. By integrating legal, ethical, and practical considerations, it not only addresses current challenges but also anticipates future developments in AI technology. This proactive stance puts the EU at the forefront of AI regulation and shapes the global debate on the safe and ethical use of AI.