Israel's ex-cyber, space chief: AI won't replace humans anytime soon

Yitzhak Ben-Israel told The Jerusalem Post that artificial intelligence, from autonomous cars to ChatGPT, won't replace humans anytime soon, and that the technology is neutral when it comes to hacking.

Artificial intelligence (photo credit: PIXABAY/WIKIMEDIA)

ChatGPT and artificial intelligence will not replace humanity anytime soon, Yitzhak Ben-Israel told The Jerusalem Post in an interview on the sidelines of his Tel Aviv University AI conference this week.

Ben-Israel, who founded the Israel National Cyber Directorate, led the Israel Space Agency for 17 years and served as a major-general in key IDF positions, said that “what could be” and what is practical and likely are two different things.

Using autonomous cars as an example, he said it “has been proven for 10 years already that autonomous cars drive better than people. So why aren’t they filling up the streets? It is not a problem with the price.

“It will not happen yet because there are problems,” he suggested. “No one wants a car accident that would kill someone. But autonomous cars still lead to fewer accidents and kill fewer people. It is the regulator who is afraid, and we humans are afraid to use technology because we cannot see what it will do.”

Humans won't sign off on advanced AI if it's too smart

He continued, “This is a strange concept. Until we can see where it will lead, we will not sign off. Why not? It is just human psychology. To get a sign-off for a regular car or a washing machine, you take a test that fulfills certain criteria and then you are approved. But if it [AI combined with a machine] is intelligent and it learns from its own experiences, which changes its conduct from the starting point – then we do not allow this.”

Maj.-Gen. (Ret.) Prof. Isaac Ben-Israel, AI Week Online Chairman; Director at Blavatnik ICRC, Tel Aviv University; Co-Head of Israel's AI Initiative. (credit: YUVAL NE'EMAN WORKSHOP FOR SCIENCE, TECHNOLOGY AND SECURITY)

Ben-Israel noted that people are constantly having children who are far more unpredictable than AI, without knowing what negative actions their kids might take, and with no need to get any kind of license.

Next, he said, “The concerns about ChatGPT machines are that if they get more intelligent and become more like people, there is a greater suspicion that they will act badly.

“Will they be intelligent like us? In general, it will take many more years for two reasons. Our human brains are basically quantum computers living in a quantum, changing world that is not binary. It is not just black or white. Sometimes we partially want something and partially do not.

“That is not a yes/no dynamic. ChatGPT is still binary and therefore still more limited. But this will be overcome in five to 10 years when we get quantum computers,” he stated.


Explaining further, Ben-Israel said, “My brain’s ‘processor’ is smarter than ChatGPT. It needs a few liters of water, a little bit of food and then it [the human brain and body] just works.”

In contrast, he stated, “Computers and the cloud that use ChatGPT have huge needs, especially in using up energy, and they also get so hot that there needs to be a special setup to put them under water sometimes to cool them off. This is a problem of technology which we have not even started to deal with.”

Even the most complex AI can be unplugged

Moreover, Ben-Israel said even the most complex AI can be relatively easily unplugged.

“If you take it out [unplug it from its power source], then there will be no ChatGPT anywhere in the world. The ‘brain’ that does this – the giant computer – is not like a human; I can unplug it, and ChatGPT can’t live without it. If it does something that is not good, you can unplug it,” he said.

Next, he stated, “Maybe there might be two plugs. So then you unplug it twice. Unless the energy and its capacities can be put into a box,” which individuals can easily own and move around with, “people will ignore it.”

“When will it be practical to see robots walking around in the streets like people? It will take dozens of years. But it can happen because there is no reason for it not to happen unless people don’t let it happen,” he declared.

Moving on to using AI for hacking, he said the technology is neutral.

AI is neutral for hacking

“You can use AI technology to improve our world – in health, transportation or in whichever area you want. You can also use it to help bad actors. You can give ideas to bad actors about how someone performs their [cyber] defense in order to find a way around it. It all depends on the user,” he said.

“Potentially, it would replace us on many platforms. Humanity will be even more dependent on computers. We are already dependent. [Even] cars have a computing processor. We depend on different levels of processing. We are more vulnerable. If someone gets into the middle of the system and disrupts the connection, we will be harmed more today. As AI develops, the need for cybersecurity will become bigger.”


Ben-Israel said that 2015 marked the beginning of a paradigm shift in the use of AI for cybersecurity.

Back then, he said, “If you wanted to defend a network to make sure there was no virus or malware attacking you, you needed to identify them. How do you know whether there are bits and codes of malware or not in a link or a file?

“At the start, the only way was to look at the bits and codes to see the inner content [on a separate secured system] and to see whether there was malware or not. But then you harm the privacy of your citizens and you do not want to do this either.”
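
The older, content-inspection approach Ben-Israel describes can be illustrated with a minimal sketch: the scanner must read the raw bytes of every file to compare them against known malware signatures, which is exactly where the privacy concern arises. The signature patterns and file names below are invented for illustration, not real malware indicators.

```python
# Minimal sketch of signature-based scanning: the scanner reads the full,
# private contents of each file and checks for known byte patterns.
from pathlib import Path

# Hypothetical byte patterns standing in for known-malware signatures.
KNOWN_SIGNATURES = {
    "example-trojan": b"\x4d\x5a\x90\x00\xde\xad\xbe\xef",
    "example-worm": b"EVIL_PAYLOAD_MARKER",
}

def scan_file(path: Path) -> list[str]:
    """Return the names of any known signatures found in the file's bytes."""
    data = path.read_bytes()  # the scanner sees the entire private content
    return [name for name, sig in KNOWN_SIGNATURES.items() if sig in data]

if __name__ == "__main__":
    for p in Path(".").glob("*.bin"):
        hits = scan_file(p)
        if hits:
            print(f"{p}: matched {hits}")
```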

More positives and negatives of AI

Today, on the positive side of AI, he said, “You can use a computer that learns how the bits and codes act online. Then you do not need to decode. You look at how the bits, codes and viruses act differently from others and you start to filter out anything which is acting normally.

“This helps a lot. It eliminates the phenomenon of harming privacy. You only need to do a very limited review about if the files are acting like a virus,” without actually having to break them down as in the past, said Ben-Israel.
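
A minimal sketch of the behavior-based approach he describes, using a generic anomaly detector: the model learns what normal activity looks like from behavioral features and flags only the outliers for closer review, so file contents never need to be decoded. The feature names and numbers here are made up for illustration.

```python
# Sketch of behavior-based filtering: learn "normal" activity, flag outliers.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [files touched per minute, outbound connections, child processes]
normal_activity = np.array([
    [12, 2, 1], [10, 1, 0], [14, 3, 1], [11, 2, 1], [13, 2, 0],
])
new_observations = np.array([
    [12, 2, 1],      # looks like ordinary activity
    [400, 95, 30],   # behaves very differently -> worth a closer look
])

model = IsolationForest(contamination="auto", random_state=0)
model.fit(normal_activity)

# predict() returns 1 for inliers (acting normally) and -1 for anomalies.
for row, label in zip(new_observations, model.predict(new_observations)):
    status = "acting normally - filtered out" if label == 1 else "flag for review"
    print(row, "->", status)
```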

However, on the negative side, “AI machine learning can be used to learn how we figure out what is normal and what is not. So they can trick us based on what we are looking for and change their method of cyberattack.”

He concluded, “The conference is very important. There were many attendees. There is a big protest in Jerusalem. We were worried about attendance, but many people still came. Some of the speakers even publicly identified with the protesters.”