How is Israel’s government shaping the global future of artificial intelligence?

Experts in the Justice Ministry and throughout the government are working to find the right balance between allowing innovation and protecting the public.

Artificial intelligence (photo credit: INGIMAGE)

Artificial intelligence is one of the foremost fields of technology currently being explored, and while there is no doubt its advancement will greatly benefit humanity, the rampant innovation taking place comes with a host of ethical, humanitarian and legal concerns.

AI is going through its Wild West era, and Israeli regulators are working to promote the fledgling technology’s development while creating sensible safeguards to protect citizens from the potential misuse of its enormous power.

Dr. Yuval Roitman, senior director of the Justice Ministry’s regulation division, said there is a pressing need for AI regulation.

“We understand that AI is here, and it’s going to be more and more substantial in the work of the Israeli government and Israeli economy, on the one hand, and on the other hand, we have a very important AI sector within the private sector,” he said. “We want to do two things: to promote the industry and to protect our citizens.”

As such, the government has assembled a host of teams drawing members from a spectrum of bodies, including the Israel National Cyber Directorate, the Innovation Authority and the Science and Technology Ministry. These teams were created to establish a national AI strategy and are responsible for, among other issues, the use of AI within the government, data privacy, the promotion of innovation in AI regulation and the ethical questions the technology raises.

IT SHOULD BE noted that the training process in core technology areas, such as chip design, algorithms, software, artificial intelligence, and cyber, is long – there are no shortcuts. (credit: HADAS PARUSH/FLASH90)

Also pertinent is a government team developing regulations for frameworks called “sandboxes,” which are limited ecosystems where developers can experiment with technology in a controlled manner.

The Justice Ministry is promoting a general sandbox act “to enable greater flexibility in quickly providing an adapted framework without the need to enact a specific sandbox in each individual case,” Roitman said. This will allow developers to experiment much more freely, which will be greatly relevant to the advancement of artificial intelligence, he added.

ANOTHER KEY player in AI regulation is a task force dedicated to issues of law and technology, headed by Deputy Attorney-General for Economic Law Meir Levin. One of its focuses has been the regulation of artificial intelligence.

Cedric Sabbah, the director for emerging technologies at the Office of the Deputy Attorney-General for International Law and a main contributor to the task force’s operations, said the international conversation on AI regulation was set off a few years ago upon the release of the Organization for Economic Cooperation and Development’s AI Principles, which provided an intergovernmental standard for AI policies.

“In the OECD’s text, you can really see the tension that there was between wishing to enable innovation and allow for all the benefits of AI to be reaped in an innovation-friendly ecosystem and the apprehensions about what a future of AI holds,” he said.

“The way they framed it was that in order for society to truly reap the benefits of AI innovation, society at large needs to have trust that the products that are being put out there are useful and will not come to hurt them,” Sabbah said, adding that Israel was “instrumental to its development along with other OECD countries, and here at the Justice Ministry, we were very much part of it.”

Since the OECD sparked the global AI conversation, Israel has joined several international initiatives created to develop standardized AI regulation, including the Global Partnership on Artificial Intelligence (GPAI) and the Ad-hoc Committee on Artificial Intelligence (CAHAI), which is run by the Council of Europe. Israel joined the CAHAI as an observer state.

“This means that we have a voice at the table, but at the same time, we don’t have a vote,” Sabbah said. “So we can influence indirectly, we can make our voice heard, we can suggest drafting proposals and policy proposals, and we’re still taken into consideration when they try to look at the overall direction that people are supporting.”

DESPITE ISRAEL’S position in these international forums, it still has a vested interest in ensuring that the global standards being developed don’t unduly hamper innovation, which is a key facet of the country’s DNA.

“For us, it’s a big deal because so much of our industry is hi-tech, and so much of that is AI innovations, so we really do want to make sure that our interests are being heard,” Sabbah said.

On that note, he elaborated on one of the current hot-button topics in AI regulation: explainability. At its core, the issue is that as a technology becomes more advanced, it becomes more opaque to those who use it. This relationship creates a potential threat, as that opacity could veil instances of discrimination, fraud or other nefarious activities.

In response to this, there is a push from some governments to require that upon request, any decision made by an AI can be dissected and explained.

Requiring explainability to be constantly available, however, can degrade a system’s effectiveness. Furthermore, disclosing how an algorithm operates could compromise a company’s intellectual property (IP).
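The trade-off described here can be made concrete with a toy sketch. In the hypothetical example below (the feature names, weights and threshold are all invented for illustration), a simple linear scoring model is explainable by construction: every decision decomposes into per-feature contributions that can be “dissected and explained” on request. More powerful models, such as deep neural networks, generally do not decompose this cleanly, which is precisely what drives the regulatory push.

```python
# Hypothetical loan-scoring weights -- invented for illustration only.
WEIGHTS = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}
THRESHOLD = 1.0

def score(applicant):
    """Return a decision plus a per-feature breakdown of how it was reached."""
    # Each feature's contribution is weight * value, so the decision
    # is fully "dissectable" -- this is explainability by construction.
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    total = sum(contributions.values())
    return {
        "approved": total >= THRESHOLD,
        "total": total,
        "explanation": contributions,  # the per-feature "dissection"
    }

result = score({"income": 3.0, "debt": 1.0, "years_employed": 2.0})
print(result["approved"])     # True (1.5 - 0.8 + 0.6 = 1.3 >= 1.0)
print(result["explanation"])
```

Note that even in this transparent toy case, publishing the breakdown reveals the weights themselves, i.e., the “secret sauce” Sabbah refers to below.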

“If you make it explainable, there’s a risk for companies that they’re going to have to disclose the secret sauce,” Sabbah said.

These potentially invasive regulatory tools could harm innovation-focused countries such as Israel.

“We need to think about what it actually means if we create an international law obligation to have explainability, because a broad explainability obligation might be difficult to translate in practical and certain terms,” Sabbah said, adding: “That being said, we generally see the value in enhancing algorithmic transparency and providing risk-assessment tools to government and the private sector. We need to examine carefully where the gaps exist and what the best ways are to address those gaps.”

According to Tal Werner-Kling, senior director of the international technology law division at the Office of the Deputy Attorney-General for International Law, taking part in international artificial-intelligence initiatives such as CAHAI offers a unique opportunity to help create a brand-new body of law for a field that has never before been regulated.

“It’s a rare chance because in so many fields of law, the rules are there, and you have to join and comply or decide not to join and pay the price,” she said.

“Right now we have an opportunity to voice an opinion and to try and collaborate and join voices with other states who have other interests,” Werner-Kling said. “Often, when we reach out to other states, we can build bridges and coalitions around these types of issues. I think that’s a very important moment in time to be involved in the creation of law.”