Modelling the Human Brain Using Computers: The State of Play


Welcome to this series on the current state of affairs in modelling the human brain using traditional silicon-based computers. In this series, we're going to look at the current best attempts at neuromorphism – the modelling of neurobiological processes using analog and digital circuitry – the components involved, computer simulations and the role of quantum processing. It's a wide, far-reaching and complex topic, but we're going to establish the field step by step.

1. Is it possible?

The basic theory behind neuromorphic computing is that the human brain performs a series of logical operations on data. Just as everyday electronics, like wireless printers, rely on a series of signals passed between their structural components to execute each function, so does the human brain. In other words, our understanding of logical computation is at least a close enough approximation of what the brain actually does for us to model it in a functional manner.

That's an important scientific point: in science we can only model, and we make no claims about the 'ontological truth' of the things we describe. A successful 'functionalist' – that is, functionally congruent – model is one that produces results, under a variety of testing conditions, that mirror the behaviour a human brain produces under similar conditions. The nature of those tests is yet to be determined, but a model becomes increasingly 'functionally accurate' the more observed brain behaviours it reproduces. So, in a way, we already have working models of the brain in our everyday computers – just not particularly accurate ones. (One great test of artificial intelligence, the Turing Test, of which we'll speak later, is yet to be passed by any machine.)

2. What are our best options?

All simulation relies on hardware, but some simulations insert numerous compatibility layers between themselves and that hardware. That is, some simulated human brains run entirely in software, while others – as in neuromorphic modelling – are built directly into hardware.

Certain challenges – such as parallelism – are tricky to overcome with a software-based approach. The brain is capable of massively parallel computation – many 'threads' at any one time – of a kind beyond even the most powerful modern supercomputer. We already know that computers possess single-thread computational faculties far in excess of the human brain's, but speed is not the issue here. The way the human brain processes information seems to demand some refinement in our understanding of computation.
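To make the serial-versus-parallel distinction concrete, here is a toy sketch in Python – all names and parameter values are our own, chosen purely for illustration. It evaluates a handful of simple threshold 'neurons' one after another, and then concurrently with a thread pool; the brain does something like the second form across billions of units at once:

```python
from concurrent.futures import ThreadPoolExecutor

def neuron(inputs, weights, threshold=1.0):
    """Toy 'neuron': fires (returns 1) if the weighted input sum
    reaches the threshold. Purely illustrative, not a real model."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

inputs = [0.5, 0.8, 0.2]
layer = [[1.0, 0.5, 0.1],
         [0.2, 0.1, 0.0],
         [0.9, 0.9, 0.9]]

# Serial evaluation: one neuron after another, as a single CPU core would.
serial = [neuron(inputs, w) for w in layer]

# Concurrent evaluation: a few threads standing in for the brain's
# massive parallelism -- same results, different execution model.
with ThreadPoolExecutor() as pool:
    parallel = list(pool.map(lambda w: neuron(inputs, w), layer))
```

Of course, a thread pool on a conventional CPU only interleaves or spreads work across a handful of cores; the point of the sketch is the difference in execution model, not a claim of brain-like scale.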

The hardware-based approach has its own challenges. Standard componentry, such as that used in mobile phones or wireless printers, does not necessarily cut the mustard. Neuromorphic modelling aims to simulate the neuronal structures in the brain – the tiny processing units that enable its kind of parallel processing. Without the ability to create these structures biologically (at least, not easily), certain essential aspects of neuronal behaviour defy silicon-based engineering: for example, the ability to learn and adjust, to divert information flows and to self-optimise (in effect, to evolve). These problems are being worked on, but significant engineering challenges remain in the way. The use of memristors to model biological componentry seems a promising step.
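Why are memristors promising? A memristor's resistance depends on the charge that has previously flowed through it, which makes it a crude electrical analogue of a synapse whose strength is shaped by past activity. The following is a minimal sketch of that idea – the class name, update rule and parameter values are our own simplifications for illustration, not a physically accurate device model:

```python
class Memristor:
    """Toy memristor: resistance depends on the history of current
    through the device. Illustrative parameters, not physical ones."""

    def __init__(self, r_on=100.0, r_off=16000.0):
        self.r_on, self.r_off = r_on, r_off
        self.w = 0.5  # internal state in [0, 1]; higher w = lower resistance

    def resistance(self):
        # Interpolate between the low- and high-resistance extremes.
        return self.w * self.r_on + (1 - self.w) * self.r_off

    def apply_voltage(self, v, dt=1e-3, k=1000.0):
        """A positive pulse nudges the state up (lower resistance,
        like synaptic strengthening); a negative pulse nudges it down."""
        i = v / self.resistance()
        self.w = min(1.0, max(0.0, self.w + k * i * dt))
        return i

m = Memristor()
before = m.resistance()
for _ in range(100):
    m.apply_voltage(1.0)   # repeated positive pulses
after = m.resistance()     # lower than before: the device 'remembers'
```

The memory is the key point: unlike an ordinary resistor, the component's behaviour now depends on what happened to it earlier, which is exactly the kind of learn-and-adjust property described above.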

In the next article, we'll take a look at some of our criteria for successful modelling of the human brain. That is, we will establish how we will know whether we have done the job successfully. As we've intimated here, it's not a binary matter (in more ways than one…!), but rather a scale of approximation to existing brain behaviour.


About the Author – This article has been submitted to us on behalf of Dell