Deep learning (DL) – a machine-learning technique in which computers learn by example, much as humans do – can be improved by choosing the most influential path to the output, not just by training deeper networks.
DL performs classification tasks using a series of layers. To execute these tasks effectively, local decisions are made progressively along the layers. But can an all-encompassing decision be made instead, by choosing the most influential path to the output rather than making these decisions locally?
In an article just published in the Nature Portfolio journal Scientific Reports, entitled “Enhancing the accuracies by performing pooling decisions adjacent to the output layer,” researchers from Bar-Ilan University (BIU) in Ramat Gan answer this question with a resounding “yes.” Preexisting deep architectures were improved by updating the most influential paths to the output, they report.
“One can think of it as two children who wish to climb a mountain with many twists and turns,” said lead researcher Prof. Ido Kanter of BIU’s physics department and the Gonda (Goldschmied) Multidisciplinary Brain Research Center. “One of them chooses the fastest local route at every intersection, while the other uses binoculars to see the entire path ahead and then picks the shortest and most significant route, just like Google Maps or Waze. The first child might get a head start, but the second will end up getting there faster.”
According to Yarden Tzach, a doctoral student and one of the key contributors to this research, “This discovery can pave the way for enhanced AI learning by choosing the most significant route to the top.”
Enhancing existing architectures through such global decisions can pave the way for improved AI that performs its classification tasks more accurately, without the need for additional layers.
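As a toy illustration only – not the authors' actual architecture, which is detailed in the Scientific Reports paper – the difference between making pooling decisions locally and deferring them until adjacent to the output can be sketched in a few lines of NumPy. Pooling early discards activations based only on their local size; pooling after the output weights are applied keeps every path alive until its full influence on the output is known:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=8)   # toy feature map (8 activations)
w = rng.normal(size=8)   # toy weights connecting each activation to the output

# Local (early) pooling: discard half the activations first, then weight.
# The max is taken on raw activations, before their influence is known.
pooled_early = x.reshape(4, 2).max(axis=1)   # 4 local max-pool decisions
out_early = pooled_early @ w[:4]             # weight only the survivors

# Pooling adjacent to the output: weight every activation first, then pool.
# The max now selects the path with the largest weighted contribution,
# i.e. the most influential path to the output.
weighted = x * w
out_late = weighted.reshape(4, 2).max(axis=1).sum()
```

In the early variant, a small activation with a large downstream weight is discarded before its influence can be measured; in the late variant it can win the pooling decision. This is the intuition behind the climbers metaphor above, not a claim about the paper's exact implementation.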
This pursuit of a deeper understanding of AI systems by Kanter and his experimental research team, led by Dr. Roni Vardi, aims to connect the biological world and machine learning, thereby creating improved, more advanced AI systems.
So far, they have discovered evidence of efficient dendritic adaptation in neuronal cultures and shown how to implement those findings in machine learning, demonstrating that shallow networks can compete with deep ones and probing the mechanism underlying successful deep learning.