Neuromorphic computing aims to develop AI hardware that is more flexible and can emulate functions of the human brain, such as contextual interpretation, sensory processing, and autonomous adaptation. “We don’t really know how brains take signals from our body sensors, process them, and make sense of the world around us. One of the reasons for that is we can’t simulate brains on regular computers – it’s just way too slow; even a cubic millimetre of the brain takes weeks to simulate for just a few seconds of activity – and that’s holding back some of the understanding of how brains work,” ICNS director André van Schaik told ZDNet.
“Therefore, we need to build a machine that can emulate the brain rather than simulate it – the difference being that it’s more of a hardware implementation, where these things run faster and in parallel.” He added that understanding the brain is one of the “final frontiers in science”. “You can’t just study the human brain in humans at the right level of detail and scale … or do an EEG, where you get brainwaves but no resolution of what the individual neurons are doing in somebody’s brain. But with this system you can do that. Hopefully we can find out how brains work and then scale, but also how they fail,” van Schaik said. At the same time, van Schaik believes the approach could improve the way AI systems are built, describing current methods used to train AI models as “very brute force”.
“They’re really just learning from lots of examples … [but] learning in brains works very differently from what we call AI at the moment. Again, we don’t quite know how that works, and again, what’s holding us back is that we are unable to simulate this on current computers at any scale,” he said.
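Van Schaik’s point about the cost of software simulation is easy to see in miniature. The sketch below is a hypothetical illustration, not code from the ICNS project: it steps a small population of leaky integrate-and-fire neurons in NumPy, where every neuron must be swept at every time step – precisely the serial bottleneck that dedicated parallel hardware is meant to remove.

```python
# Minimal leaky integrate-and-fire (LIF) simulation in NumPy.
# Illustrative only: parameters and scale are arbitrary and do not
# describe the ICNS system. It shows why serial software simulation
# scales poorly -- every neuron is updated at every time step.
import numpy as np

N = 10_000          # neurons (a cubic millimetre of cortex holds far more)
DT = 1e-4           # time step: 0.1 ms
TAU = 0.02          # membrane time constant: 20 ms
V_THRESH = 1.0      # spike threshold (arbitrary units)
V_RESET = 0.0       # reset potential after a spike

rng = np.random.default_rng(0)
v = np.zeros(N)     # membrane potentials

def step(v, drive):
    """Advance all N neurons by one time step of leaky integration."""
    v = v + DT * (-v / TAU + drive)   # decay toward rest, add input
    spiked = v >= V_THRESH
    v[spiked] = V_RESET               # reset the neurons that fired
    return v, spiked

steps = int(1.0 / DT)                 # simulate one second of activity
for _ in range(steps):
    drive = rng.normal(60.0, 20.0, N)  # noisy external input current
    v, _ = step(v, drive)

# One simulated second already costs 10,000 serial sweeps over every
# neuron; a brain-scale model multiplies this by many orders of magnitude.
```

Even this toy model makes the trade-off concrete: the simulation loop is inherently sequential in time, so on a conventional computer the wall-clock cost grows with both the neuron count and the simulated duration, which is the gap a hardware emulator running all neurons in parallel is intended to close.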
According to van Schaik, the team envisions the proof-of-concept setup looking much like a current data centre. It would consist of three compute racks in a cooled environment, incorporate Intel configurable network protocol accelerator (COPA)-enabled field-programmable gate arrays (FPGAs), and be connected by a high-performance computing (HPC) network fabric. The system would then be fed models and data from computational neuroscience, neuroanatomy, and neurophysiology. The project builds on work Intel’s Neuromorphic Research Community (INRC) has been doing with its Loihi neuromorphic processor.
Van Schaik said that while the Loihi chip is very power efficient, it is also less flexible: it is a custom-designed chip and therefore non-configurable, whereas FPGAs can be configured and reconfigured in software. “We want to offer this more flexible, and more power-hungry, system as a separate pathway for that community,” he said. “We are currently able to simulate much larger networks than they can on that platform.” There is also a sustainability aspect to the research, with van Schaik explaining that the system to be built would be able to process more data with less power. The projected thermal design power of the system is 38.8 kW at full load.
“[With] the advent of AI and machine learning and smart devices … we’re collecting so much data … when that data goes to the cloud, it consumes electricity … and we’re actually on a trajectory … [where] data consumes as much electricity as everything else in the world,” he said. “If we look at the data centres that process data at the moment … they consume massive amounts of electricity. The human brain runs on about 25 watts … we hope that by building AI and data processing more like brains, we can do it at much lower power.”
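The two figures quoted in the article support a simple back-of-envelope comparison; the snippet below just works through that arithmetic.

```python
# Back-of-envelope comparison using the figures quoted in the article.
brain_watts = 25.0        # approximate power draw of a human brain
system_watts = 38_800.0   # projected thermal design power: 38.8 kW

ratio = system_watts / brain_watts
print(f"The proposed system draws about {ratio:,.0f}x the brain's power.")
# -> The proposed system draws about 1,552x the brain's power.
```

The gap underlines van Schaik’s framing: even a deliberately flexible, power-hungry research emulator sits three orders of magnitude above the biological system it is meant to study.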
