The European Union (EU) has reached a preliminary agreement to regulate generative AI tools, a significant step in addressing the rapidly advancing technology. According to the Washington Post, as the world scrambles to deal with the risks brought about by the rapid development of AI, EU officials reached a landmark provisional deal on Friday local time on the Artificial Intelligence Act (AI Act). It will be the most comprehensive regulatory agreement on AI in the region and the broadest, most far-reaching bill of its kind to date.
Thierry Breton, the EU's Commissioner for the Internal Market, announced the agreement, which covers generative AI that produces content and subjects it to a series of controls. Although the draft legislation still needs formal approval from EU member states and the European Parliament, it marks a key step in EU policy. It will regulate the development and distribution of machine learning and AI models, as well as their use in education, employment, healthcare, and other fields.
Breton took to X to celebrate the EU's feat. He said:
“The EU becomes the very first continent to set clear rules for the use of AI. The #AIAct is much more than a rulebook — it’s a launchpad for EU startups and researchers to lead the global AI race.”
AI regulation in the EU
According to a proposal seen by Bloomberg, the EU is taking a tiered approach to regulating generative AI models and systems. The approach establishes rules for foundation models, which are AI systems that can be adapted to many different applications, and classifies AI systems by risk level, applying more or less regulation depending on the identified risks.
Under the latest proposal, the development of AI will be divided into four categories, distinguished by the degree of social risk each may pose: minimal risk, limited risk, high risk, and prohibited.
- Prohibited: Includes any system that circumvents user consent, targets protected groups, or performs real-time biometric tracking (such as facial recognition).
- High Risk: Includes anything “intended to be used as a security component of a product” or for specific applications such as critical infrastructure, education, legal/judicial affairs and employee recruitment.
At the same time, chatbots like ChatGPT, Bard, and Bing fall into the “limited risk” category.
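For teams that want to think about where their own products might land, the tiered structure described above can be pictured as a simple lookup. The sketch below is purely illustrative and is not drawn from the Act's text; the RiskTier enum, the classify_use_case function, and the example use-case labels are hypothetical names chosen for this example, and the real categories are far more detailed.

```python
from enum import Enum

class RiskTier(Enum):
    # Hypothetical representation of the four tiers described above
    PROHIBITED = "prohibited"
    HIGH = "high risk"
    LIMITED = "limited risk"
    MINIMAL = "minimal risk"

# Illustrative mappings only, based on the examples mentioned in this article
PROHIBITED_USES = {"real-time biometric tracking", "circumventing user consent"}
HIGH_RISK_USES = {"critical infrastructure", "education", "employee recruitment", "judicial affairs"}
LIMITED_RISK_USES = {"general-purpose chatbot"}

def classify_use_case(use_case: str) -> RiskTier:
    """Map a described use case to a risk tier (toy example, not legal advice)."""
    if use_case in PROHIBITED_USES:
        return RiskTier.PROHIBITED
    if use_case in HIGH_RISK_USES:
        return RiskTier.HIGH
    if use_case in LIMITED_RISK_USES:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(classify_use_case("general-purpose chatbot"))  # RiskTier.LIMITED
```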
The European Commission wrote in the agreement that AI should not be an "end" in itself but a tool that serves people, with the ultimate goal of benefiting humanity. Rules for AI placed on the EU market, or that otherwise affects EU citizens, should therefore be "people-centred" and should give people confidence that the technology is used in a "safe and legal manner."
AI Act and Regulations
The AI Act is the world's first comprehensive AI law. It aims to regulate AI in the EU and to ensure better conditions for the technology to create benefits such as improved healthcare, safer and cleaner transport, more efficient manufacturing, and cheaper and more sustainable energy. The law divides AI into categories of risk, ranging from "unacceptable" technologies that must be banned down to medium and low-risk forms of AI.
There were some disagreements among EU member states regarding the regulation of generative AI models, known as “foundation models”. Germany, France, and Italy initially opposed directly regulating these models, favouring self-regulation from the companies behind them. However, a compromise was eventually reached, and the EU agreed to landmark rules for artificial intelligence.
The AI Act is a landmark achievement for the EU, as it aims to establish the region as a global hub for trustworthy AI by laying down harmonized rules governing the development, marketing, and use of AI. The regulation has been praised for its potential to protect fundamental rights, democracy, and the environment, while also supporting innovation and making Europe a leader in the field of AI. It represents a significant step towards the regulation of AI.
Implications for Generative AI Tools
The new regulations will have several implications for businesses developing or using generative AI tools in Europe. These include:
1. Intellectual Property Rights: Organizations should be aware of fields of use, territorial restrictions, and sublicensing rights, as well as ownership of any modifications or improvements, financial terms, and the grounds for and consequences of termination.
2. Data Protection: European data protection authorities have made it clear that they do not interpret the exemptions in the AI Act as limiting existing data protection obligations. Controllers of generative AI systems will also need to consider how to adequately address and respond to requests exercising the rights of access, correction, and deletion.
3. Transparency Requirements: Generative AI systems, like ChatGPT, will have to comply with transparency requirements, such as disclosing that content was generated by AI and publishing summaries of the copyrighted data used for training.
Conclusion
The EU's preliminary agreement to regulate generative AI tools marks a significant milestone in addressing the challenges posed by rapidly advancing AI technology. The AI Act and the compromises reached among EU member states demonstrate the EU's commitment to ensuring the safe and responsible development and use of AI in the region. As the regulations continue to evolve, businesses operating in the AI space should closely monitor developments and adapt their strategies accordingly. What do you think about the regulation of generative AI? Is it a good idea? Let us know your thoughts in the comment section below.
Author Bio
Efe Udin is a seasoned tech writer with over seven years of experience. He covers a wide range of topics in the tech industry, from industry politics to mobile phone performance. From mobile phones to tablets, Efe has kept a keen eye on the latest advancements and trends, providing insightful analysis and reviews to inform and educate readers. Efe is passionate about tech, covering interesting stories and offering solutions where possible.