Neuromorphic computing seeks to mimic the structure and operation of the human brain by designing chips whose processing elements behave like biological neurons and synapses. The goal is to build markedly more energy-efficient computing systems for complex tasks such as pattern recognition and natural language processing.
Origins
– The concept of neuromorphic computing was introduced in the late 1980s by Carver Mead, a professor at Caltech. He coined the term “neuromorphic” to describe very-large-scale integration (VLSI) systems containing analog electronic circuits that mimic neurobiological architectures.
– Through the 1990s, Mead and his colleagues built early neuromorphic chips that modeled the retina, cochlea, and other sensory systems. These early systems were limited in complexity, however, by the chip-manufacturing processes available at the time.
Current State of Research
– In recent years, advances in VLSI technology have enabled the creation of more sophisticated neuromorphic chips with millions of artificial neurons and synapses. Major technology firms like IBM and Intel have active neuromorphic computing research projects.
– In 2014, IBM unveiled TrueNorth, a chip with 1 million programmable neurons and 256 million configurable synapses. It can run pattern-recognition workloads at far lower power than conventional CPUs or GPUs.
– In 2017, Intel introduced Loihi, a neuromorphic chip with around 130,000 neurons and 130 million synapses. Loihi targets real-time, adaptive processing for autonomous applications such as robotics.
– Universities and research labs around the world are also developing custom neuromorphic chips for different applications, from self-driving cars to medical diagnostics. However, there are still many challenges to overcome before neuromorphic systems can rival biological brains.
Applications
One of the main application areas being explored for neuromorphic chips is machine learning and AI. The low-power, event-driven signaling of neuromorphic hardware is well suited to spiking neural networks, which compute only when input events arrive rather than on every clock cycle.
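As a concrete illustration, the sketch below simulates a single leaky integrate-and-fire (LIF) neuron, the basic unit that most spiking chips implement in some form. It is a minimal plain-Python model; the threshold, leak, and reset values are illustrative and not taken from any particular chip.

    # Minimal leaky integrate-and-fire (LIF) neuron simulation.
    # Parameter values are illustrative, not from any specific chip.

    def simulate_lif(input_current, threshold=1.0, leak=0.9, reset=0.0):
        """Return the time steps at which the neuron spikes."""
        v = 0.0                      # membrane potential
        spikes = []
        for t, i in enumerate(input_current):
            v = leak * v + i         # leaky integration of the input
            if v >= threshold:       # fire when the threshold is crossed
                spikes.append(t)
                v = reset            # reset the potential after a spike
        return spikes

    # A brief input burst followed by silence: the neuron does no work
    # while its input is quiet, which is the source of the power savings.
    current = [0.3] * 10 + [0.0] * 10
    print(simulate_lif(current))     # prints [3, 7]

Because computation happens only when the membrane potential is driven by incoming events, power scales with activity rather than with a fixed clock rate.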
Neuromorphic systems also hold promise for real-time sensory processing and situational awareness in autonomous robots and vehicles, since spiking neural networks can handle visual, auditory, and spatial data streams efficiently.
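Conventional sensor data must first be converted into spikes before a spiking network can consume it. A common scheme is rate coding, sketched below: brighter pixels emit spike events with higher probability. The three-pixel toy image is hypothetical; event cameras produce comparable streams directly in hardware.

    import random

    # Rate-code a toy "image" into spike events: brighter pixels fire
    # more often. Probabilities and image values are illustrative.

    def rate_encode(pixels, steps=20, seed=0):
        """Yield (time_step, pixel_index) spike events."""
        rng = random.Random(seed)
        for t in range(steps):
            for i, intensity in enumerate(pixels):   # intensity in [0, 1]
                if rng.random() < intensity:
                    yield (t, i)

    image = [0.05, 0.9, 0.4]          # three pixels: dark, bright, mid
    events = list(rate_encode(image))
    print(len(events), "events")      # the bright pixel dominates the stream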
Other potential applications include data filtering, pattern recognition for medical diagnosis, financial analysis, and social-behavior modeling.
Challenges
A key challenge is scaling neuromorphic systems up to the complexity of biological neural networks, which contain tens of billions of neurons and trillions of synapses. Most existing neuromorphic chips offer only thousands to millions of artificial neurons.
There are also challenges in programming desired functions and learning rules into neuromorphic chips. Many existing systems require hand-tuning of synaptic connections, which does not scale to larger networks; one alternative is sketched below.
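A widely studied alternative to hand-tuning is letting synapses adjust themselves with a local learning rule such as spike-timing-dependent plasticity (STDP), which chips like Loihi support in programmable form. The sketch below implements a simplified pair-based STDP update; the amplitudes and time constant are illustrative.

    import math

    # Simplified pair-based STDP: a synapse strengthens when the
    # presynaptic spike precedes the postsynaptic spike, and weakens
    # otherwise. Constants are illustrative, not from any real chip.

    A_PLUS, A_MINUS = 0.01, 0.012    # potentiation / depression amplitudes
    TAU = 20.0                       # decay time constant, in ms

    def stdp_update(weight, t_pre, t_post):
        """Return the new weight after one pre/post spike pairing."""
        dt = t_post - t_pre
        if dt > 0:    # pre before post: strengthen (causal pairing)
            weight += A_PLUS * math.exp(-dt / TAU)
        else:         # post before pre: weaken (anti-causal pairing)
            weight -= A_MINUS * math.exp(dt / TAU)
        return max(0.0, min(1.0, weight))   # clip to a valid range

    print(stdp_update(0.5, t_pre=10.0, t_post=15.0))  # weight increases
    print(stdp_update(0.5, t_pre=15.0, t_post=10.0))  # weight decreases

Because the rule depends only on the timing of the two spikes at a single synapse, it can run locally and in parallel across millions of synapses without a central controller.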
Integrating neuromorphic chips with conventional von Neumann architectures and their dataflow is also an area of active research; an example interface convention is sketched below.
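Most such interfaces exchange spikes using some form of address-event representation (AER), in which each spike crossing the chip boundary is encoded as the address of the neuron that fired plus a timestamp. The 32-bit word layout below is a hypothetical simplification for illustration; real interfaces define their own word formats.

    # Pack spike events into AER-style words for transfer between a
    # host CPU and an event-driven accelerator. The 16/16-bit split
    # is a hypothetical layout chosen only for this sketch.

    def pack_event(timestamp, neuron_id):
        """Pack a spike into a 32-bit word: 16-bit time, 16-bit address."""
        return ((timestamp & 0xFFFF) << 16) | (neuron_id & 0xFFFF)

    def unpack_event(word):
        """Recover (timestamp, neuron_id) from a packed 32-bit word."""
        return (word >> 16) & 0xFFFF, word & 0xFFFF

    # Host side: encode spikes produced by a conventional preprocessing
    # stage and hand them off as a compact event stream.
    events = [(3, 42), (7, 42), (9, 100)]        # (time_step, neuron_id)
    words = [pack_event(t, n) for t, n in events]
    assert [unpack_event(w) for w in words] == events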
Startups and Industry Adoption
In addition to projects at IBM, Intel and universities, many technology startups are emerging around neuromorphic computing, such as BrainChip, General Vision, and SynSense.
Large companies like Qualcomm, Samsung, and Bosch are investing in and partnering with neuromorphic startups to eventually bring neuromorphic processors to consumer devices.
Industry adoption is still at an early stage, and it may take another 5–10 years of R&D before neuromorphic chips begin displacing conventional processors in specialized applications. The prospect of low-power machine intelligence, however, is driving rapid growth and investment in the field.