Cultural prism: Very real artificial intelligence

A self-aware machine may someday set its own goals and strive to achieve them, contradicting human interests, justifying drastic means, and even fighting for self-preservation.

THE WORLD’S top Go player, Lee Sedol, and Demis Hassabis, the CEO of DeepMind Technologies and developer of AlphaGo, arrive at an award ceremony for the Google DeepMind Challenge Match against Google’s artificial intelligence program AlphaGo in Seoul, South Korea, in March. (photo credit: REUTERS)
Self-aware computers that turn on their human masters, such as Skynet in Terminator and HAL 9000 in 2001: A Space Odyssey, are a classic science fiction theme.
With great strides being made in artificial intelligence, the gap between such far-fetched scenarios and reality is closing. Fast.
The vision of artificial general intelligence (AGI), or strong/full AI, is not merely to simulate parts of human intellect, but to match and surpass it. This seemingly implausible threshold, also known as the Singularity, may be reached in the not-very-distant future.
This is the era of machine learning. Instead of programmers spelling out explicit rules and conditional if-then statements, the system digests raw data over many iterations of fine-tuning. The more inputs it is exposed to, the more it “learns,” just as living creatures do.
A fascinating machine-learning method is the neural network, which mimics biological nervous systems. Layers of interconnected nodes combine their inputs using weighted scores and pass the results onward. The weights are constantly tweaked, nudging the output closer and closer to a desired outcome. Through this form of function approximation, the computer teaches itself and keeps improving.
Pattern recognition is a classic example. Show a neural network enough pictures of cats, and it can take in the parameters of a new image and determine, with high probability, whether or not it is a cat.
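To make that weight-tweaking loop concrete, here is a minimal sketch in Python of a single artificial “neuron,” the simplest building block of such networks. The “cat-like” features and all the numbers are invented stand-ins for real image data:

```python
import numpy as np

# Toy stand-ins for images: each row holds two invented features
# (say, ear pointiness and whisker density); 1 = cat, 0 = not cat.
X = np.array([[0.9, 0.8], [0.8, 0.9], [0.2, 0.1], [0.1, 0.3]])
y = np.array([1.0, 1.0, 0.0, 0.0])

rng = np.random.default_rng(0)
weights = rng.normal(size=2)  # the tunable weighted scores
bias = 0.0

def predict(x):
    # Weighted sum squashed into a 0-1 probability (logistic function)
    return 1.0 / (1.0 + np.exp(-(x @ weights + bias)))

# The learning loop: compare the output to the desired outcome,
# then nudge each weight in the direction that shrinks the error.
for step in range(1000):
    error = predict(X) - y
    weights -= 0.5 * (X.T @ error) / len(y)
    bias -= 0.5 * error.mean()

new_image = np.array([0.85, 0.75])           # an unseen, cat-like input
print(f"P(cat) = {predict(new_image):.2f}")  # approaches 1.0 after training
```

Real systems stack thousands of such units into many layers, but the principle is the same: score, compare, tweak, repeat.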
In the popular perception, AI advancement means computers beating human players at complex games. History was made in 1997, when IBM’s Deep Blue beat Garry Kasparov at chess. But in retrospect, there is not much “intelligence” in brute-force searching vast numbers of possible continuations and picking the best path.
Then, in March 2016, Google DeepMind’s AlphaGo beat Lee Sedol at the traditional Chinese board game of Go. Because it is impossible to calculate all of Go’s combinations, brute-force search was augmented by neural networks that learned how to play the game and navigated the game tree wisely. Although AlphaGo probably did not feel happy when it won, this was certainly one step closer to real AI.
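Roughly, the published approach balances each candidate move’s estimated value against an exploration bonus shaped by the policy network’s prior. A minimal sketch of such a selection rule, with invented move names and statistics, might look like this:

```python
import math

def puct_score(mean_value, prior, visits, parent_visits, c=1.0):
    """Score a candidate move: its learned value estimate plus an
    exploration bonus favoring moves the policy network rates highly
    but the search has not yet tried much."""
    return mean_value + c * prior * math.sqrt(parent_visits) / (1 + visits)

# Invented statistics for three candidate moves at one position:
# (mean value from simulations, policy-network prior, visit count)
candidates = {
    "move A": (0.52, 0.40, 120),
    "move B": (0.48, 0.35, 30),
    "move C": (0.55, 0.05, 10),
}
total_visits = sum(v for _, _, v in candidates.values())

# The search repeatedly descends the game tree by picking the
# highest-scoring move, so promising branches get explored most.
best = max(candidates, key=lambda m: puct_score(*candidates[m], total_visits))
print(best)  # "move B": decent value, strong prior, still under-explored
```

The constant c and all the statistics here are placeholders; the point is the shape of the rule, which lets a learned prior steer the search away from hopeless branches.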
Machine learning is all around us. Intelligent personal assistants, speech and face recognition, autonomous cars, human genome research, intelligence gathering, search engine optimization, bioinformatics – we are already using this technology every day.
Dror Ben-David, Head of Neural Networks R&D Labs (NRDL) at Matrix, opened my eyes to this field and showed me some of the visionary and revolutionary stuff they’re working on, leaving me hyped, but also concerned.
With so many cool dimensions and applications, it’s hard to imagine the potential risks. But although computer intelligence is called “artificial,” the dangers are very real.
A self-aware machine may someday set its own goals and strive to achieve them, contradicting human interests, justifying drastic means, and even fighting for self-preservation. When an entire brain is someday cloned (whole brain emulation), the simulated computer-brain may believe that it is real.
It is a mistake to view AI as something that just happens inside computers. In the age of the Internet of Things (IoT), the world will be a network of interconnected networks – learning, collaborating and utilizing direct access to, well, everything. An extreme scenario could be a system which decides to eradicate the human race by printing and distributing a deadly virus.
We hope that the good guys are developing and employing AI capabilities responsibly. But we must also assume that negative entities may be promoting destructive tools, and that no matter who develops them, they may be exploited by others, or simply “decide” to act on their own, for their own benefit.
Our limited human bodies and minds may constrain our ability to predict these negative trends, but scientists and innovators have been voicing concerns.
“Without setting norms which will guide positive development for mankind,” warns Ben-David, “we may descend into an uncontrolled and uninhibited race.”
OpenAI, a non-profit AI research company, warns that incorrectly built or used AI may be exploited for “potentially malicious ends.”
Entrepreneur Elon Musk referred to AI as “our biggest existential threat,” and warned that “with artificial intelligence we are summoning the demon.”
Professor Stephen Hawking said that “the development of full AI could spell the end of the human race,” and that “humans couldn’t compete and would be superseded.”
These are not extremists or alarmists, but realists.
Yet not all seem to share these concerns. Stanford’s One Hundred Year Study on Artificial Intelligence has concluded in a recent report that there is “no cause for concern,” and no threat to humankind is likely in the near future.
Tech giants, which invest huge efforts and resources in AI and machine learning, seem to be totally at ease. Or at least they say they are.
“We should not be afraid of AI,” said Mark Zuckerberg, emphasizing the “good it will do in the world.” Even when predicting that computers may surpass humans within a decade, he limited this to better sensors, and stressed that it “doesn’t mean that the computers will be thinking or be generally better.”
Zuckerberg may be intentionally playing down the matter in order to suppress resistance and ensure free rein.
Even before any apocalyptic scenario, there are troubling trends. Computers process what we write, where we go and even what we say, and manipulate our feeds and the ads we see. In a way, the Internet has greatly diverged from its original romantic concept of an open and free network. We are controlled by machines.
So AI is both wonderful and scary. What can we do?
On a personal level, read and keep up with what’s going on. I find it troubling that so many people are unaware of what AI is and how much it influences their lives.
We should embrace the positive aspects of AI, but never underestimate the potentially negative.
Technological innovation cannot, and should not, be banned, but steered and controlled. AI should be treated like atomic energy, with both its positive and devastating potential.
We cannot allow a small group of interested parties to dominate the technology that controls our lives, even if they claim that they are doing it “to make the world a better place.”
Unbiased, nonprofit, international forums should collaboratively promote responsible AI development. Governments must keep ahead of the game, regulate development and implementation, maintain legal and ethical boundaries, and ensure transparency and accountability.
Despite the open-source trend, we should consider tightening the grip on core capabilities, and denying them to rogue states and entities.
It is time to internalize that there is no longer such a thing as privacy.
Will a super-intelligent, self-aware computer ever outthink and outsmart humans? Or will superb data analysis always fall short of human conscience, emotions and intuition?
Perhaps superior artificial intelligence will save our planet from the destructive path humanity is taking. We may even thank it someday.
The writer is founder of Cross-Cultural Strategies Ltd. www.CCSt.co.il