Neuromorphic Computing

When people started to study artificial intelligence, they began to realize that current machine learning is more "left brain": it focuses on language and analytic thinking. Now two major projects have emerged to address neuromorphic computing: IBM's neurosynaptic computing effort and Intel's self-learning chip, called Loihi.
The ultimate goal is to address synthesis and pattern recognition, which are right-brain-inspired, with an architecture aimed at audio, vision, and multisensory fusion alongside the left-brain activities. Both Intel and IBM are building physical chips that are event-driven, a different approach from the tensor mathematics that Google and Microsoft have pursued. These chips will also be used alongside RISC and ARM processors, and the work builds on decades of research at universities such as Caltech, MIT, and the Technion in Israel.
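To make the contrast concrete, here is a minimal sketch in Python/NumPy (purely illustrative; neither chip is programmed this way) of the difference between dense tensor math, where every input is processed on every step, and event-driven computation, where work is done only for inputs that actually fired:

```python
import numpy as np

# Dense "tensor math" style: every input value contributes on every step.
def dense_layer(weights, inputs):
    return weights @ inputs  # full matrix-vector product

# Event-driven style: only the inputs that spiked this step are processed.
def event_driven_layer(weights, spike_indices):
    # Sum only the weight columns of the neurons that fired.
    return weights[:, spike_indices].sum(axis=1)

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 10))

analog_input = rng.random(10)                 # dense activations, all processed
spikes = np.flatnonzero(analog_input > 0.8)   # sparse binary events

print(dense_layer(W, analog_input))
print(event_driven_layer(W, spikes))          # work scales with the number of spikes
```

The two functions do not compute the same numbers; the point is that the event-driven path touches only the synapses of neurons that actually fired, which is where the energy savings come from.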
The chips that have been built are self-learning and use backward chaining: much as a child who touches the top of a stove and burns its hand reasons backward from that event, the chips reason backward from outcomes over time. They are extremely energy efficient and learn through an approach called asynchronous spiking. Current machine learning, by contrast, relies on massive data, and we have learned that models built for one specific situation are difficult to generalize.
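Asynchronous spiking can be illustrated with a leaky integrate-and-fire neuron, the simplest spiking-neuron model: the membrane potential leaks, accumulates input, and emits a spike event only when it crosses a threshold. This is a hedged sketch in plain Python/NumPy (the function name and parameter values are made up for illustration, not how Loihi or IBM's chip is actually programmed):

```python
import numpy as np

def simulate_lif(input_current, threshold=1.0, leak=0.95):
    """Leaky integrate-and-fire neuron: the membrane potential decays (leaks),
    accumulates input, and emits a spike only when it crosses the threshold."""
    v = 0.0
    spike_times = []
    for t, i_t in enumerate(input_current):
        v = leak * v + i_t          # integrate input with leak
        if v >= threshold:          # threshold crossing -> spike event
            spike_times.append(t)
            v = 0.0                 # reset after spiking
    return spike_times

rng = np.random.default_rng(1)
current = rng.random(100) * 0.3     # noisy input drive
print(simulate_lif(current))        # irregular, event-like spike times
```

The output is a short list of spike times rather than a dense vector of activations, which is what makes the hardware event-driven and energy efficient.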
A neuromorphic chip can adapt to multimodal situations and determine the appropriate approach to a problem; it is closer to the complete-brain project that has been promised for years. Early results show that spiking neural networks achieve a higher degree of accuracy than prior approaches. The chip creates an asynchronous core mesh that supports sparse, hierarchical, and recurrent neural topologies. Intel's Loihi uses a 14 nm process with 130,000 neurons and 130 million synapses, and each neuron can talk to the others through the chip's synapses.
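The sparse, recurrent topology and on-chip learning can be sketched in software as a toy network in which only neurons that spike send messages over their synapses, and existing synapses are strengthened by a simple spike-timing-based rule. The rule below is an illustrative assumption, not Loihi's actual learning rule, and the sizes are tiny compared with the 130,000-neuron chip:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50                                      # toy size; the real chip has ~130,000 neurons
# Sparse recurrent synapse matrix: roughly 5% of possible connections exist.
W = np.where(rng.random((n, n)) < 0.05, rng.random((n, n)) * 0.5, 0.0)
np.fill_diagonal(W, 0.0)

v = np.zeros(n)                             # membrane potentials
last_spike = np.full(n, -np.inf)            # most recent spike time per neuron
threshold, leak, lr = 1.0, 0.9, 0.01

for t in range(200):
    drive = rng.random(n) * 0.2             # background input
    fired = v >= threshold                  # neurons spiking at this step
    last_spike[fired] = t
    v[fired] = 0.0                          # reset spiking neurons
    # Event-driven propagation: only spiking neurons send input over synapses.
    v = leak * v + drive + W[:, fired].sum(axis=1)
    # Spike-timing-based update (illustrative assumption, not Loihi's rule):
    # strengthen existing synapses from recently active neurons onto neurons
    # that are about to fire.
    recent_pre = (t - last_spike) < 5
    about_to_fire = v >= threshold
    W[np.ix_(about_to_fire, recent_pre)] += lr * (W[np.ix_(about_to_fire, recent_pre)] > 0)

print("active synapses:", int((W > 0).sum()))
print("mean weight of active synapses:", float(W[W > 0].mean()))
```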
This is a transformative capability that opens up a new class of applications, such as multimodal sensing. IBM's research lab has built a chip with 1 million neurons and 256 million synapses, and it consumes far less real estate and power than current machine learning technologies, drawing only about 70 milliwatts.
The ARM approach is to create massively parallel processors linked by high-speed synaptic connections, so that connectivity does not become the bottleneck it is in current machine learning hardware.
Future blogs will draw conclusions from this and discuss the uses of neuromorphic computing.