New Worlds: Less than meets the eye

April 17, 2016 05:48

An image of the human brain. (photo credit: REUTERS)

Our brains are so good at recognizing objects that we can automatically supply the concept of a cup when shown a photo of a curved handle or identify a face from just an ear or nose. Neurobiologists, computer scientists and robotics engineers are all interested in understanding how such recognition works – in both human and computer vision systems.

Now, new research at Rehovot’s Weizmann Institute of Science and the Massachusetts Institute of Technology (MIT) suggests that there is an “atomic” unit of recognition – a minimum amount of information that an image must contain for recognition to occur. The study’s findings, which recently appeared in the Proceedings of the National Academy of Sciences (PNAS), imply that current models of vision need to be adjusted, and they have implications for the design of computer and robot vision systems.

In the field of computer vision, for example, the ability to recognize an object in an image has long been a challenge for artificial intelligence researchers. Prof. Shimon Ullman and Dr. Daniel Harari, together with Liav Assif and Ethan Fetaya, wanted to know how well current models of computer vision reproduce the capacities of the human brain.

For this purpose, they recruited thousands of participants through Amazon’s Mechanical Turk crowdsourcing platform and had them identify a series of images. The images came in several formats – some were successively cropped from larger images, revealing less and less of the original, while others underwent successive reductions in resolution, with accompanying loss of detail.
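The two kinds of degraded stimuli described above can be sketched roughly as follows. This is a minimal illustration, not the researchers’ actual code; the function names and step counts are my own:

```python
import numpy as np

def crop_sequence(img, steps):
    """Yield successively smaller centered crops of a grayscale image,
    revealing less and less of the original."""
    h, w = img.shape
    for i in range(steps):
        frac = 1.0 - i / steps                     # fraction of the image kept
        ch, cw = max(1, int(h * frac)), max(1, int(w * frac))
        top, left = (h - ch) // 2, (w - cw) // 2
        yield img[top:top + ch, left:left + cw]

def downsample_sequence(img, steps):
    """Yield versions of the image at successively coarser resolutions,
    halving the resolution at each step by block-averaging."""
    h, w = img.shape
    for i in range(steps):
        factor = 2 ** i
        ch, cw = max(1, h // factor), max(1, w // factor)
        trimmed = img[:ch * factor, :cw * factor]
        # Average each (factor x factor) block into a single pixel
        yield trimmed.reshape(ch, factor, cw, factor).mean(axis=(1, 3))
```

Each sequence ends in an image carrying far less information than the original, which is what lets an experiment probe the point at which recognition fails.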

When the scientists compared the scores of the human subjects with those of the computer models, they found that humans were much better at identifying partial- or low-resolution images. The comparison suggested that the differences were also qualitative: Almost all the human participants were successful at identifying the objects in the various images, up to a fairly high loss of detail – after which, nearly everyone stumbled at the exact same point.

The division was so sharp, the scientists termed it a “phase transition.” If an already minimal image loses just a minute amount of detail, everybody suddenly loses the ability to identify the object, said Ullman. “That hints that no matter what our life experience or training, object recognition is hardwired and works the same in all of us.”
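One way to picture such a sharp transition is to locate, in a series of recognition rates measured at decreasing levels of detail, the step where recognition collapses. The sketch below is my own illustration, and the rates in it are invented, not the study’s data:

```python
def sharpest_drop(recognition_rates):
    """Return the index i at which the fall from rates[i] to rates[i+1]
    is largest - the candidate 'phase transition' point."""
    drops = [recognition_rates[i] - recognition_rates[i + 1]
             for i in range(len(recognition_rates) - 1)]
    return max(range(len(drops)), key=drops.__getitem__)

# Hypothetical fraction of observers recognizing the object,
# from most to least detailed image:
rates = [0.95, 0.93, 0.90, 0.88, 0.20, 0.05]
```

Here the rates decline gently and then collapse between the fourth and fifth levels, mirroring the pattern the article describes: nearly everyone succeeds up to a point, then nearly everyone fails.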

The researchers suggest that the differences between computer and human capabilities lie in the fact that computer algorithms adopt a “bottom-up” approach that moves from simple features to complex ones. Human brains, on the other hand, work in “bottom-up” and “top-down” modes simultaneously, by comparing the elements in an image to a sort of model stored in their memory banks.
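As a toy illustration of the two modes (my own sketch, not the study’s model): a bottom-up pass builds simple features directly from the pixels, while a top-down pass compares the whole input against object templates held in memory.

```python
import numpy as np

def bottom_up_edges(img):
    """Bottom-up: extract a simple feature map
    (horizontal gradient magnitudes) from the pixels alone."""
    return np.abs(np.diff(img.astype(float), axis=1))

def top_down_match(img, templates):
    """Top-down: compare the input against stored templates
    (a dict of label -> array) and return the best-matching label."""
    def score(t):
        return -np.sum((img.astype(float) - t.astype(float)) ** 2)
    return max(templates, key=lambda label: score(templates[label]))
```

A system using only the first function never consults prior knowledge; the second cannot work without it. The suggestion in the article is that human vision runs both at once.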

The findings also suggest that there may be something elemental in our brains that is tuned to work with a minimal amount – a basic “atom” – of information. That elemental quantity may be crucial to our recognition abilities, and incorporating it into current models could improve their sensitivity. These “atoms of recognition” could prove valuable tools for further research into the workings of the human brain and for developing new computer and robotic vision systems.


Prof. Judea Pearl, the father of US journalist Daniel Pearl, who was kidnapped and murdered by terrorists in Pakistan in 2002 while working on a story there, has an outstanding reputation in his own right.

An electrical engineering alumnus of the Technion-Israel Institute of Technology in Haifa, he has done pioneering research leading to the development of knowledge representation and reasoning tools in computer science. For this work, he recently received a prestigious award from Pittsburgh’s Carnegie Mellon University – the 2015 Dickson Prize in Science. The prize, which includes a medal and a $50,000 monetary award, is given annually by the university to Americans who have made outstanding contributions to science. Pearl announced that he will be donating a portion of the prize money to the Technion, where he completed his bachelor’s degree.

After completing his B.Sc. at the Technion, Pearl went on to pursue a master’s degree in physics at Rutgers University and a doctorate in electrical engineering at the Polytechnic Institute of Brooklyn. In 1970, he became a faculty member at the University of California, Los Angeles, where he currently directs the Cognitive Systems Laboratory and heads research in artificial intelligence, human cognition and the philosophy of science. His work on reasoning under uncertainty laid the groundwork for computerized systems with far-reaching applications in a wide range of fields, such as security, medicine, genetics and language understanding.

He is a member of the US National Academy of Sciences and the National Academy of Engineering, a founding fellow of the Association for the Advancement of Artificial Intelligence and a member of the Institute of Electrical and Electronics Engineers.

In 2011, Pearl received the A.M. Turing Award, considered the “Nobel Prize of computing,” and then the Technion’s Harvey Prize in recognition of significant contributions to the advancement of humankind in the areas of science and technology, human health and peace in the Middle East.
