The EU reached a provisional deal on the AI Act on December 8, 2023, following record-breaking 36-hour 'trilogue' negotiations between the EU Council, the EU Commission and the European Parliament.
The landmark bill will regulate the use of AI systems, including generative AI models like ChatGPT and AI systems used by governments and in law enforcement operations, including for biometric surveillance.
The final draft maintained the tiered approach to controlling foundation models, with categories ranging from 'minimal risk' through 'limited risk' and 'high risk' to 'unacceptable risk' AI practices.
'High-risk' AI practices will be strictly regulated, with obligations such as model evaluation, assessment and tracking of systemic risks, cybersecurity protections, and reporting on the model's energy consumption.
The provisional agreement also requires a fundamental rights impact assessment before deployers put a high-risk AI system on the market. Practices deemed to carry 'unacceptable risk' are banned outright; these include manipulative techniques, systems exploiting vulnerabilities, social scoring, and indiscriminate scraping of facial images.
An automatic categorization as 'systemic' was also added for models trained with computing power exceeding 10^25 floating point operations (FLOPs).
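For a sense of scale, a common back-of-the-envelope heuristic (not part of the Act itself) estimates training compute as roughly 6 × parameters × training tokens. A minimal sketch, assuming that approximation and using purely illustrative model sizes, shows how a training run compares against the 10^25 FLOP threshold:

```python
# Rough check of whether a training run crosses the EU AI Act's
# 10^25 FLOP 'systemic risk' threshold, using the widely cited
# 6 * N * D approximation (N = parameters, D = training tokens).
# The model sizes below are illustrative assumptions, not official figures.

SYSTEMIC_THRESHOLD_FLOPS = 1e25

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training FLOPs via the 6*N*D heuristic."""
    return 6 * params * tokens

# Hypothetical examples: a 7B-parameter model trained on 2T tokens,
# and a 1T-parameter model trained on 10T tokens.
for name, n, d in [("7B params / 2T tokens", 7e9, 2e12),
                   ("1T params / 10T tokens", 1e12, 10e12)]:
    flops = training_flops(n, d)
    systemic = flops > SYSTEMIC_THRESHOLD_FLOPS
    print(f"{name}: {flops:.2e} FLOPs -> systemic: {systemic}")
```

Under this heuristic, the smaller hypothetical run lands well below the threshold, while the larger one exceeds it and would be automatically categorized as 'systemic'.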
A certain number of AI models and practices will be exempted from regulation.
First, free and open source models will not have to comply with any control measures outlined by the law.
Second, the EU Council introduced several exemptions for law enforcement operations, including the exclusion of sensitive operational data from transparency requirements and the use of AI in exceptional circumstances related to public security.
The EU will require a database of general-purpose and high-risk AI systems explaining where, when and how they are being deployed in the EU, even when deployed by a public agency.
EU countries, led by France, Germany and Italy, insisted on having a broad exemption for any AI system used for military or defense purposes, even when the system is provided by a private contractor.
In the final draft, systems used exclusively for military or defense purposes will not have to comply with the Act.
The agreement provides that the regulation would not apply to AI systems used for the sole purpose of research and innovation or to people using AI for non-professional reasons.
A new AI Office, established within the EU Commission, will oversee these most advanced AI models, contribute to fostering standards and testing practices, and enforce the common rules in all member states.
A scientific panel of independent experts will also advise the AI Office about general-purpose AI models.
An AI Board comprising member states' representatives will serve as a coordination platform and an advisory body to the EU Commission, giving member states an essential role in implementing the regulation, including the design of codes of practice for foundation models.
The provisional agreement provides for more proportionate caps on administrative fines for SMEs and start-ups in case of infringements of the provisions of the AI Act.
Prohibitions on 'unacceptable risk' AI practices will take effect six months after the AI Act enters into force.
Requirements for high-risk AI systems, powerful AI models, the conformity assessment bodies, and the governance chapter will start applying one year after the law has been adopted.
This Cyber News was published on www.infosecurity-magazine.com. Publication date: Mon, 11 Dec 2023 12:30:12 +0000