Scientists at Ben-Gurion University of the Negev, together with colleagues in the Netherlands and Germany, have revealed in unprecedented detail how structures in the parietal cortex - the upper back part of the brain - become active during the perception of numerical symbols. Their findings appear in a recent issue of Neuron.
The article sheds light on how the brain processes numerical information - both abstract quantities and their representations as symbols. These findings, they said, will lead to studies of how numerical representation develops in children. Such studies could aid in rehabilitating people who suffer from dyscalculia - the inability to understand or manipulate numbers.
The team included Dr. Roi Cohen Kadosh, Kathrin Cohen Kadosh and Prof. Avishai Henik - dean of BGU's faculty of humanities and social sciences and a member of BGU's Zlotowski Center for Neuroscience - along with Prof. Rainer Goebel and Dr. Amanda Kaas, researchers from Maastricht University and the Max Planck Institute for Brain Research.
They conducted experiments demonstrating that the two hemispheres of the parietal lobe function differently in processing numbers: while the left parietal lobe handles abstract numerical representations, the right depends on the notation used. The researchers concluded that their "results challenge the commonly held belief that numbers are represented solely in an abstract way in the human brain" and "advocate the existence of distinct neuronal populations for numbers, which are notation-dependent in the right parietal lobe."
In their experiments, the researchers also used the adaptation phenomenon - the fact that the brain adapts to repeated stimuli by reducing its initial activity.
They hypothesized that if the assumption of an abstract representation of numbers in the parietal cortex held true, adaptation would be observed both within and across notations. In contrast, if numerical representation were not abstract, they expected adaptation to be modulated by the notation type - a result that would suggest the existence of distinct neuronal populations for each notation.
Their analysis revealed an effect of notation in the right parietal lobe, showing that this region appears to harbor neurons that process non-abstract numerical representations, in addition to neurons that code for abstract representations.
The researchers said that exploring how the processing of numerical symbols develops could have clinical implications. "Developmental studies should focus on tracing the emergence of numerical representation in the brain, investigating in particular at which stage such a representational divergence appears. Such findings could contribute significantly both to the field of numerical cognition research and rehabilitation of people suffering from developmental dyscalculia," they wrote.
NEVER FORGET A FACE
New research from Vanderbilt University in Nashville suggests that we can remember more faces than other objects, and that faces "stick" the best in our short-term memory. The reason may be that our expertise in remembering faces allows us to encode or package them better for memory.
Kim Curby, the study's primary author and a post-doctoral researcher at Yale, likens such encoding to packing a suitcase. "How much you can fit in a bag depends on how well you pack it," she said. The findings will be published in Psychonomic Bulletin & Review.
The research has practical implications for the way people use visual short-term memory (VSTM), since being able to store more faces in VSTM may be useful in complex social situations. Short-term memory is crucial if we are to perceive a continuous world; for example, to understand this sentence, your short-term memory must retain the words at the beginning while you read through to the end. VSTM helps us process and briefly remember images and objects, rather than words or sounds, for a few seconds at a time, but its capacity is limited. The new research focuses on whether we can store more faces than other objects in VSTM.
Study participants viewed up to five faces on a screen for varying lengths of time (up to four seconds). A single face was later presented, and participants decided whether it had been part of the original display. For comparison, the process was repeated with other objects, such as watches or cars.
When participants studied the displays for only a brief time (half a second), they could store fewer faces than objects in VSTM. This is apparently because faces are more complex than watches or cars, and require more time to be encoded. Surprisingly, when participants were given more time to encode the images (four seconds), an advantage for faces over objects emerged.