The Fifth Generation of Machine Learning

These past few years I’ve been working on machine intelligence, artificial intelligence, machine-augmented reality and human-assisted machine intelligence. All have their strengths, but all have significant weaknesses, so several months ago I started to look at neuromorphic computing. It was evident from the beginning that the technology has advanced cheaply enough to enable this new form of machine learning. While home assistants handle simple household tasks, we are entering the fifth generation of machine learning, which is based on the way our minds work. Even though the synaptic operations in our brains are very slow, they perform an incredible amount of processing because of their massively parallel functioning. Billions of neurons communicate with each other through synaptic connections.
While prior machine learning tried to learn by studying data, the newest technology provides experiential knowledge. This approach is new and is being investigated by Dr. Modha at IBM and by Intel with its Loihi project, which was introduced at CES 2018. Supercomputing, while interesting in this space, performs very poorly. Highly parallel chips from IBM, Intel, Nvidia, ARM and some of the ODMs made me realize that a network of multiple chips simulating the human brain is much more efficient. I’ve also tried to see whether a multistage quantum computing model would work. Our minds are far more efficient than today’s computer architectures because of their parallelism. So the new concept of neuromorphic computing will be transposed into chips that are loosely coupled through very fast interconnects.
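To make that picture of loosely coupled, massively parallel cores a little more concrete, here is a minimal sketch in Python. It is not any vendor’s actual API: the NeuroCore class, the routing table, and all of the numbers are hypothetical, meant only to show spike events traveling between many simple cores over a shared interconnect instead of one fast sequential processor doing all the work.

```python
# Hypothetical sketch: several toy "cores" exchange spike events over an
# interconnect. Names, structure, and values are illustrative only.
from collections import defaultdict

class NeuroCore:
    """A toy core holding a handful of neurons (illustrative, not a real chip API)."""
    def __init__(self, core_id, n_neurons, threshold=1.0):
        self.core_id = core_id
        self.potentials = [0.0] * n_neurons
        self.threshold = threshold

    def receive(self, neuron_idx, weight):
        # Accumulate incoming spike weight on the target neuron.
        self.potentials[neuron_idx] += weight

    def step(self):
        # Fire every neuron whose potential crossed the threshold, then reset it.
        spikes = []
        for i, v in enumerate(self.potentials):
            if v >= self.threshold:
                spikes.append((self.core_id, i))
                self.potentials[i] = 0.0
        return spikes

# Wiring table: (source core, source neuron) -> list of (dest core, dest neuron, weight)
routes = defaultdict(list)
routes[(0, 0)].append((1, 2, 1.1))
routes[(0, 0)].append((1, 3, 1.1))

cores = [NeuroCore(i, n_neurons=4) for i in range(2)]
cores[0].receive(0, 1.2)          # external stimulus pushes one neuron over threshold

for tick in range(3):             # only spike events travel between cores each tick
    outgoing = [s for core in cores for s in core.step()]
    for src in outgoing:
        for dst_core, dst_neuron, w in routes[src]:
            cores[dst_core].receive(dst_neuron, w)
    print(f"tick {tick}: spikes fired {outgoing}")
```

The point of the sketch is that each core’s work is tiny and local; whatever efficiency there is comes from how many such cores you can wire together over the interconnect.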
The human brain has billions of neurons wired together, while a current chip can only model on the order of 256 neurons and 64,000 synapses per core. The problem is that synapses do not behave in a Boolean fashion. So, thanks to these scientists, the concept of spikes, which is embodied in the Intel and IBM chips, defines a new model of computing unlike the von Neumann model. These chips must be able to deal with spatial recognition, speech, and sensory information. They must be able to work as a single core or in a larger network so that the computation happens in clusters. In the future we must bring this down to a smaller chip capable of being put into an edge computer or even a handheld device.
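As a rough illustration of why a spiking neuron is not Boolean, here is a minimal leaky integrate-and-fire model in Python. The leak, threshold, and reset values are made up for illustration and are not the actual TrueNorth or Loihi neuron parameters; the point is only that the same total input can produce different spike trains depending on its timing.

```python
# A minimal leaky integrate-and-fire (LIF) neuron: it integrates weighted input
# over time, leaks charge each step, and emits a spike only when a threshold is
# crossed. Parameters are illustrative, not those of any real neuromorphic chip.

def lif_run(inputs, leak=0.5, threshold=1.0, reset=0.0):
    v = 0.0
    spikes = []
    for x in inputs:
        v = leak * v + x          # integrate the input and apply the leak
        if v >= threshold:        # the timing of the spike carries the information,
            spikes.append(1)      # not a static 0/1 output as in Boolean logic
            v = reset
        else:
            spikes.append(0)
    return spikes

# The same total input, delivered with different timing, gives different spike trains.
print(lif_run([0.4, 0.4, 0.4, 0.0]))   # spread out: [0, 0, 0, 0], never spikes
print(lif_run([1.2, 0.0, 0.0, 0.0]))   # concentrated: [1, 0, 0, 0], spikes at once
```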
In 1986 we started down the path of neural computing, and now we can execute that transformation. It will make current machine learning look like medieval analysis. IBM uses this kind of chip in its TrueNorth project. Upcoming posts will walk you through all the steps required to do this new form of computing. A lot of this work comes from research done at the Technion in Haifa, Israel.