Europe Reaches a Deal on the World's First Comprehensive AI Rules

European Union negotiators clinched a deal Friday on the world's first comprehensive artificial intelligence rules, paving the way for legal oversight of AI technology that has promised to transform everyday life and spurred warnings of existential dangers to humanity.
Negotiators from the European Parliament and the bloc's 27 member countries overcame big differences on controversial points including generative AI and police use of face recognition surveillance to sign a tentative political agreement for the Artificial Intelligence Act.
Civil society groups gave it a cool reception as they wait for technical details that will need to be ironed out in the coming weeks.
They said the deal didn't go far enough in protecting people from harm caused by AI systems.
The recent boom in generative AI sent European officials scrambling to update a proposal poised to serve as a blueprint for the world.
The European Parliament will still need to vote on the act early next year, but with the deal done that's a formality, Brando Benifei, an Italian lawmaker co-leading the body's negotiating efforts, told The Associated Press late Friday.
Generative AI systems like OpenAI's ChatGPT have exploded into the world's consciousness, dazzling users with the ability to produce human-like text, photos and songs but raising fears about the risks the rapidly developing technology poses to jobs, privacy and copyright protection and even human life itself.
Now, the U.S., U.K., China and global coalitions like the Group of 7 major democracies have jumped in with their own proposals to regulate AI, though they're still catching up to Europe.
AI companies subject to the EU's rules are also likely to extend some of those obligations to markets outside the continent.
The AI Act was originally designed to mitigate the dangers from specific AI functions based on their level of risk, from low to unacceptable.
Lawmakers pushed to expand it to foundation models, the advanced systems that underpin general purpose AI services like ChatGPT and Google's Bard chatbot.
Foundation models looked set to be one of the biggest sticking points for Europe.
Negotiators managed to reach a tentative compromise early in the talks, despite opposition led by France, which called instead for self-regulation to help homegrown European generative AI companies compete with big U.S. rivals, including OpenAI's backer Microsoft.
Also known as large language models, these systems are trained on vast troves of written works and images scraped off the internet.
They give generative AI systems the ability to create something new, unlike traditional AI, which processes data and completes tasks using predetermined rules.
The companies building foundation models will have to draw up technical documentation, comply with EU copyright law and detail the content used for training.
Researchers have warned that powerful foundation models, built by a handful of big tech companies, could be used to supercharge online disinformation and manipulation, cyberattacks or creation of bioweapons.
Rights groups also caution that the lack of transparency about data used to train the models poses risks to daily life because they act as basic structures for software developers building AI-powered services.
The thorniest topic turned out to be AI-powered face recognition surveillance, on which negotiators reached a compromise only after intensive bargaining.
Rights groups said they were concerned about the exemptions and other big loopholes in the AI Act, including the lack of protection for AI systems used in migration and border control, and the option for developers to opt out of having their systems classified as high risk.

