Simulating the Human Brain: Neuromorphic Hardware Simulation

While the Blue Brain Project is making considerable progress towards accurate simulation of the human brain in software, other research into hardware simulation is making quite different, but remarkable, gains. This article will look at the state of neuromorphic hardware engineering, and compare it with the neocortical simulation that we looked at previously.

Neurowhat?

‘Neuromorphology’ is the study of nervous system structure; the term combines the Greek roots for ‘nerve’, ‘form’ and ‘study’. Neuromorphic engineering, then, is the practice of engineering hardware with the form of neuronal structures – that is, building VLSI (very-large-scale integration) systems that model the complex structure of the brain. Each analog or digital component must closely model an individual neuron; in this way, their interactions can form an accurate model of a functioning human brain.
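
To make the ‘one component, one neuron’ idea concrete, here is a minimal sketch of a leaky integrate-and-fire neuron – a common abstraction in neuromorphic design. The function name and all parameter values are illustrative, not taken from any real chip.

```python
# Illustrative leaky integrate-and-fire neuron. Parameter values
# (tau, thresholds) are arbitrary examples, not from any real device.

def simulate_lif(input_current, dt=0.001, tau=0.02, v_rest=0.0,
                 v_thresh=1.0, v_reset=0.0):
    """Return spike times for a leaky integrate-and-fire neuron.

    input_current: one input value per timestep of length dt (seconds).
    """
    v = v_rest
    spikes = []
    for step, i_in in enumerate(input_current):
        # Membrane potential leaks toward rest while integrating input.
        v += dt * (-(v - v_rest) + i_in) / tau
        if v >= v_thresh:            # threshold crossed: emit a spike
            spikes.append(step * dt)
            v = v_reset              # reset after spiking
    return spikes

# A constant suprathreshold current yields a regular spike train;
# a subthreshold current yields none.
print(len(simulate_lif([1.5] * 1000)), "spikes in 1 s")
```

In hardware, each such unit becomes a small analog circuit rather than a loop – but the behaviour being modelled is the same.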

Neurogrid

Our best shot at this exists in Stanford University’s recently developed ‘Neurogrid’ – a device that mirrors neuronal activity using analog components (such as those found in any number of common printers). One key to the design is the use of memristors – circuit elements, first theorised in 1971 but only fabricated decades later, that exploit a ‘pinched hysteresis’ effect to act as flexible memory arrays (memristors are also finding use in emerging technologies such as spintronic data storage). That is, memristors can store data without requiring a continuous power source. A critical part of neuronal activity is the ability to retain information with minimal power input – the human brain operates on membrane voltages in the region of tens of millivolts. As a result, the Neurogrid can simulate one million neurons, each firing up to ten ‘neuron spikes’ a second – a frequency of 10Hz – while drawing less than a millionth of the Blue Brain Project’s power requirements.
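
The ‘pinched hysteresis’ behaviour is easy to see in simulation. Below is a minimal sketch of a linear-drift memristor model (after the widely cited HP Labs formulation); the parameter values are illustrative, not those of any real device or of Neurogrid’s circuits.

```python
import math

# Minimal linear-drift memristor sketch. R_ON/R_OFF and K are
# illustrative values, not measurements from a real device.

R_ON, R_OFF = 100.0, 16000.0   # fully-doped / undoped resistances (ohms)
K = 10_000.0                   # drift coefficient (state change per coulomb)

def step(w, v, dt):
    """Advance the internal state w (0..1) under applied voltage v."""
    r = R_ON * w + R_OFF * (1.0 - w)         # effective resistance
    i = v / r                                # instantaneous current
    w = min(1.0, max(0.0, w + K * i * dt))   # charge drives the state
    return w, i

# Drive with one sine cycle: current vs. voltage traces a loop that
# 'pinches' through the origin, the memristor's signature.
w, dt = 0.1, 1e-4
for n in range(10_000):
    v = math.sin(2 * math.pi * n * dt)
    w, i = step(w, v, dt)

# The key property for low-power memory: remove the voltage and the
# state (hence the stored resistance) simply persists.
w_idle, _ = step(w, 0.0, dt)
assert w_idle == w
```

The final assertion is the point of the exercise: with no applied voltage there is no current, so the device holds its state without drawing power.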

Challenges

The tricky bit is simulating the action between neurons. When neurons communicate with one another, much of the computation takes place in the dendrites and axons that connect them – blending information from a billion different places in the brain into a single, computed information stream. That’s tricky to model using analog circuitry, as we have no simple analog equivalent for that kind of computational ability. While the Blue Brain Project can – at significant computational expense – simulate this effect, the silicon-bound Neurogrid team will struggle to do so. At best, they may wind up attributing greater cognitive prowess to individual neurons, exploiting further their operational, evolutionary and learning behaviours. If they do that, they may wind up with something that does functionally model the human brain fairly accurately – though, evidently, with significant structural differences. It’s up to the tests to which these systems are subjected – the functionality-checking tests – to decide which is the more investor-worthy path. And with the Blue Brain Project competing for a decade-long $1 billion grant, there’s no small investment involved.
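
One common abstraction of that between-neuron computation treats each dendritic branch as its own nonlinear subunit whose outputs the soma then combines – a two-layer computation inside a single neuron. The toy sketch below illustrates the idea; the functions, weights and threshold are all invented for illustration, and this is not the scheme used by Neurogrid or the Blue Brain Project.

```python
import math

# Toy two-layer model of dendritic integration: each branch sums its
# synaptic inputs and applies a nonlinearity before the soma combines
# the branch outputs. All names and values here are illustrative.

def branch_output(inputs, weights):
    # Sigmoid per branch stands in for local dendritic nonlinearities.
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1.0 / (1.0 + math.exp(-s))

def soma_output(branches):
    # branches: list of (inputs, weights) pairs, one per dendritic branch.
    total = sum(branch_output(i, w) for i, w in branches)
    return total > len(branches) * 0.75   # fire if branches are mostly active

# Clustered input on one branch drives that branch strongly, so the
# neuron fires even though the total input is modest.
branches = [
    ([1.0, 1.0, 1.0], [2.0, 2.0, 2.0]),   # clustered strong input
    ([0.2, 0.1, 0.0], [2.0, 2.0, 2.0]),   # weak scattered input
]
print("neuron fires:", soma_output(branches))
```

Attributing this extra layer of computation to each modelled neuron is exactly the ‘greater cognitive prowess’ route described above: the structure diverges from biology, but the function can remain close.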

So what are these ‘tests’, and how have they evolved? That’s a question for the entities that have financed artificial intelligence research so far – organisations like DARPA (the U.S. Defense Advanced Research Projects Agency). The kinds of tests they publicise are designed to stretch the capabilities of modern AI to the limit – until more formidable and exclusive tests, such as the Turing Test, become a surmountable barrier for our best neurological models.
