A deeper look at the EU’s AI Act

Following the EU's approval of the groundbreaking AI Act, take a closer look at the key elements poised to redefine the rules of the artificial intelligence game.

Artificial Intelligence words are seen in this illustration taken March 31, 2023 (photo credit: DADO RUVIC/REUTERS)

In a monumental development, negotiators representing the European Union (EU) reached a historic agreement on Friday, paving the way for the world's inaugural set of regulations governing artificial intelligence (AI). The accord, born out of extensive deliberations between the European Parliament and representatives from the EU's 27 member nations, marks a pivotal stride towards establishing legal oversight for AI technologies.

The landmark agreement, tentatively named the Artificial Intelligence Act, encompasses a wide array of contentious issues, including generative AI and the utilization of facial recognition surveillance by law enforcement agencies. While the European Parliament is not set to formally vote on the act until early next year, the completed deal suggests a high likelihood of approval.

Itamar Cohen, partner in the hi-tech department of the Amit, Pollak, Matalon law firm, specializing in privacy protection, technology, and cyber regulation, highlighted the EU’s proactive approach in recognizing the transformative nature of virtual activities. He pointed out that, just as regulations evolved with the introduction of phones and business laws, new regulations are needed to govern the increasingly prevalent use of big data analysis and predictive AI technologies.

A European Union flag flies outside the European Commission headquarters in Brussels, Belgium, December 19, 2019. (credit: REUTERS/YVES HERMAN)

“Since 2016, with the implementation of the General Data Protection Regulation (EU GDPR), the EU has been actively regulating the cyber sphere, recognizing the shift from real-world activities to a more virtual existence. The EU legislators are keenly aware of the opportunities and risks associated with this transition, adapting and adjusting legislation to address the changes in how people live their lives in this increasingly virtual world,” he said.

The AI Act encompasses a broad definition of AI systems, covering any system based on statistical analysis of big data for predictions. “The scope of the regulation and what it considers an AI system is very broad,” Cohen said. “However, the way we regulate, or the obligations derived from that are connected directly to the risk created by those systems to human life, welfare, privacy, security, and safety.”

Anticipated to take full effect no earlier than 2025, the AI Act introduces strict financial penalties, including fines of up to €35 million ($38 million) or 7% of a company’s global turnover for violations.

The act categorizes AI systems into three main types: prohibited AI practices, general AI systems, and high-risk AI systems. Prohibited practices, such as using biometric identification systems in public areas, are outright banned, with minimal exemptions for national security. General AI systems face relatively minimal obligations, focusing on transparency and disclosure. The most stringent requirements are reserved for high-risk AI systems, covering fields like education, banking, insurance, and medical devices.

Cohen emphasized that the obligations apply both to the manufacturers of AI systems and to those putting AI applications on the market. From the design stage, companies must ensure the safety and accuracy of their systems, documenting the training process and ensuring the use of lawfully collected, high-quality data. Ongoing management systems are required to identify and mitigate risks throughout the system life cycle.
The specific obligations also highlight the EU’s commitment to fairness. Companies must ensure that their AI systems not only work effectively but also treat individuals fairly, adhering to the principles outlined in the European Bill of Rights and anti-discrimination laws. Striking a balance between system effectiveness and societal fairness presents a significant challenge for companies, particularly in cases where data may introduce biases.

“On the one hand, they’re trying to make sure that any AI system is in a way fair, or treating people well, pushing them toward a better society,” Cohen elaborated. “But on the other hand, if your interface is too affected by that, it might be seen as your system being discriminatory or treating people unfairly. So you need to find the right balance. That’s something that I think will significantly affect AI systems’ design and application as we move forward.”
