EU regulation sets the stage for continued growth of Israeli AI companies

The AI Regulation has a broad scope, covering providers, users, distributors, importers and resellers of AI, including Israeli companies innovating in AI.

EUROPEAN EXECUTIVE Vice President Margrethe Vestager speaks at a media conference on the European Union’s approach to artificial intelligence, in Brussels, in April. (photo credit: OLIVIER HOSLET/REUTERS)
On April 21, the European Commission published the long-awaited proposal for a Regulation on Artificial Intelligence. The proposed AI Regulation introduces a first-of-its-kind comprehensive, harmonized regulatory framework for artificial intelligence. For Israeli companies innovating in AI, this is a major step toward the legal certainty needed to facilitate further investment. It will also impact Israeli companies that use AI and want to do business with customers in the EU, as the new rules will place direct regulatory burdens on certain classifications of AI technologies.
Broad regulatory scope
The AI Regulation has a broad scope: it applies to providers, users, distributors, importers and resellers of AI that place AI systems on the market, put AI systems into service, or make use of AI systems within the EU.
Israeli companies developing, selling or using AI systems which have a nexus to Europe will be governed by this regulation, whether the systems themselves are located in Israel or elsewhere.
Classification of AI
The AI Regulation will introduce a tiering of regulatory requirements, with stricter controls applying to AI systems that carry higher inherent risk.
The most restricted level applies to prohibited AI practices. These are AI applications that the EU has determined to be particularly intrusive and that must not be allowed to take place. Prohibited AI practices include AI used for social scoring, large-scale surveillance (with notable exceptions), adverse behavioral influencing through AI-based dark patterns (subliminal techniques beyond a person’s consciousness), and AI-based micro-targeting (exploiting the vulnerabilities of a specific group). There is no scope to sell AI systems that fall within the prohibited classification in the EU.
The second classification relates to high-risk AI systems. These are technologies anticipated to present a significant risk of harm. These systems are permitted, but only on a restricted basis where specific regulatory controls are in place to support safe use. The AI Regulation includes a list of high-risk AI systems, which may be expanded by the European Commission in due course, covering a wide range of applications including AI systems deployed in relation to credit scoring, essential public infrastructure, social welfare and justice, medical and other regulated devices, and transportation systems.
IF AN AI technology falls within the high-risk classification, the regulatory controls that must be adopted include:
• Transparency to users about the characteristics, capabilities and limitations of the technology.
• Reporting of serious incidents to market surveillance authorities.
• Establishment, implementation and documentation of a risk management system to assess, monitor and review risks, both before placing the system for sale and on an ongoing basis thereafter.
• Ensuring any data sets used to support training, validation and testing of AI are subject to appropriate data governance and management practices to mitigate the risk of bias, discrimination or other harm.
• Ensuring effective human oversight over all AI systems, to review outputs and mitigate the risk of bias or other harms.
• Preparing and maintaining complete and up-to-date technical documentation for users.
• Registration in an EU database of high-risk AI systems.
The third classification is for lower risk AI systems. These are AI systems that fall outside the scope of those identified as high risk and are not deployed for a prohibited practice. These systems are subject to a transparency regime.
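The three-tier scheme described above can be sketched as a simple lookup. This is an illustrative sketch only: the use-case names, the mapping and the default tier are assumptions made for the example, while the actual classification is defined by the regulation's annexes and would require legal assessment.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"   # banned outright (e.g. social scoring)
    HIGH_RISK = "high_risk"     # permitted only with regulatory controls
    LOWER_RISK = "lower_risk"   # transparency obligations only

# Hypothetical, non-exhaustive mapping loosely following the
# examples in the article; not the regulation's actual annexes.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.PROHIBITED,
    "subliminal_dark_patterns": RiskTier.PROHIBITED,
    "credit_scoring": RiskTier.HIGH_RISK,
    "medical_device": RiskTier.HIGH_RISK,
    "spam_filter": RiskTier.LOWER_RISK,
}

def classify(use_case: str) -> RiskTier:
    # Default to the lowest tier when a use case is not listed;
    # in practice an unlisted system still needs legal review.
    return USE_CASE_TIERS.get(use_case, RiskTier.LOWER_RISK)
```

A provider could use a structure like this as a first-pass screen before commissioning a formal compliance assessment.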
Regulatory oversight of the new regime is achieved through the establishment of supervisory and enforcement authorities in each EU member state and the European Artificial Intelligence Board. These bodies are collectively responsible for market surveillance and control of AI systems, and for enforcement. Enforcement may include fines under a regime similar to that of the GDPR privacy regime – in this case up to €30 million or (if higher) 2%-6% of global annual turnover, depending on the infringement.
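The "€30 million or, if higher, a percentage of turnover" ceiling can be illustrated with simple arithmetic. The function name and the flat 6% rate in the example are assumptions for illustration; the applicable percentage (2%-6%) depends on the type of infringement.

```python
def max_fine_eur(global_annual_turnover_eur: float, pct: float = 0.06) -> float:
    """Upper bound of a fine under the proposed regime:
    EUR 30 million or, if higher, a percentage of global
    annual turnover (2%-6%, depending on the infringement)."""
    return max(30_000_000, pct * global_annual_turnover_eur)

# A company with EUR 1 billion turnover facing the 6% tier:
print(max_fine_eur(1_000_000_000))  # → 60000000.0
```

For smaller companies the €30 million floor dominates: 6% of €100 million turnover is only €6 million, so the ceiling stays at €30 million.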
Israeli companies that are providers of AI into the EU market will need to be familiar with the new regime and prepared to cooperate with EU-based customers and regulators to support compliance, including by providing full access to training, validation and testing datasets, etc. Infringements could be costly even if all sales activity is undertaken offshore from Israel.
The introduction of a new, clear and likely robustly enforced regulatory scheme in one of the world's largest trading blocs will undoubtedly create a paradigm shift in responsibility across the AI ecosystem, at once providing legal certainty and stability, but also creating risk for those who do not comply with the new rules. Israeli AI companies would do well to stay ahead of the emerging AI regulatory landscape and build compliance by design into their systems now, in order to secure further investment and maintain market-leading growth in this fast-moving industry.
Andrew Dyson is a partner at the DLA Piper Intellectual Property and Technology Group, where he co-chairs the firm’s global data protection, privacy and security practice. Ron Feingold is an intern at the DLA Piper Israel Country Group.