Desktop GPUs Simulate 24 Billion Synapse Mammal Brain Cortex

Dr James Knight and Prof Thomas Nowotny from the University of Sussex’s School of Engineering and Informatics used the latest Graphics Processing Units (GPUs) to give a single desktop PC the capacity to simulate brain models of almost unlimited size. This work will make large brain simulations accessible to researchers with modest budgets.

The research builds on the work of US researcher Eugene Izhikevich, who pioneered a similar method for large-scale brain simulation in 2006.

The researchers applied Izhikevich’s technique to a modern GPU, with approximately 2,000 times the computing power available 15 years ago, to create a cutting-edge model of a macaque’s visual cortex (with 4.1 million neurons and 24.2 billion synapses), a model that previously could only be simulated on a supercomputer.

The researchers’ GPU-accelerated spiking neural network simulator uses the large amount of computational power available on a GPU to ‘procedurally’ generate connectivity and synaptic weights ‘on the go’ as spikes are triggered – removing the need to store connectivity data in memory.
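The core trick can be illustrated with a small sketch. This is plain Python rather than the authors' CUDA-based simulator, and every name in it is illustrative: the idea is that each synapse's existence and weight are re-derived on demand from a deterministic per-connection seed, so nothing is ever stored.

```python
import numpy as np

def synapse(pre, post, base_seed=1234, p_connect=0.1):
    """Procedurally derive whether a synapse pre -> post exists, and its weight.

    The same (pre, post) pair always maps to the same RNG seed, so the
    connectivity and weight are reproducible on demand instead of stored.
    """
    rng = np.random.default_rng((base_seed, pre, post))
    if rng.random() >= p_connect:   # with probability 1 - p_connect, no synapse
        return 0.0
    return rng.normal(0.5, 0.1)     # synaptic weight, drawn deterministically

# When neuron 42 spikes, regenerate its outgoing connections on the fly:
outgoing = [synapse(42, post) for post in range(1000)]
```

On a GPU, thousands of threads would each evaluate such a function in parallel for one postsynaptic target, trading a small amount of extra computation for a massive saving in memory.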

Initialization of the researchers’ model took six minutes, and simulation of each biological second took 7.7 minutes in the ground state and 8.4 minutes in the resting state – up to 35% less time than a previous supercomputer simulation. In a 2018 run on one rack of an IBM Blue Gene/Q supercomputer, initialization of the same model took around five minutes and simulating one second of biological time took approximately 12 minutes.

Prof Nowotny, Professor of Informatics at the University of Sussex, said: “Large-scale simulations of spiking neural network models are an important tool for improving our understanding of the dynamics and ultimately the function of brains. However, even small mammals such as mice have on the order of 1 trillion synaptic connections, meaning that simulations require several terabytes of data – an unrealistic memory requirement for a single desktop machine.”
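A back-of-envelope calculation shows why that memory figure is plausible. The bytes-per-synapse cost here is an illustrative assumption (a 4-byte postsynaptic index plus a 2-byte weight), not a figure from the paper:

```python
# Rough memory estimate for storing mouse-scale connectivity explicitly.
synapses = 1e12            # ~1 trillion synaptic connections (mouse-scale)
bytes_per_synapse = 4 + 2  # assumed: 4-byte target index + 2-byte weight
total_tb = synapses * bytes_per_synapse / 1e12  # decimal terabytes
print(f"{total_tb:.0f} TB")
```

Even this minimal encoding lands in the multi-terabyte range, which is why procedural generation, rather than bigger RAM, is the enabling step for desktop hardware.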

Nature Computational Science – Larger GPU-accelerated brain simulations with procedural connectivity

Simulations are an important tool for investigating brain function but large models are needed to faithfully reproduce the statistics and dynamics of brain activity. Simulating large spiking neural network models has, until now, needed so much memory for storing synaptic connections that it required high performance computer systems. Here, we present an alternative simulation method we call ‘procedural connectivity’ where connectivity and synaptic weights are generated ‘on the fly’ instead of stored and retrieved from memory. This method is particularly well suited for use on graphical processing units (GPUs)—which are a common fixture in many workstations. Using procedural connectivity and an additional GPU code generation optimization, we can simulate a recent model of the macaque visual cortex.

SOURCES- University of Sussex, Nature Computational Science
Written by Brian Wang

34 thoughts on “Desktop GPUs Simulate 24 Billion Synapse Mammal Brain Cortex”

  1. Not exactly. Performance may not scale linearly with resources. It's a safe bet, actually, that the workload isn't perfectly parallelizable. Even if you had enough GPUs to create a cluster the size of a human brain, you would probably not reach realtime performance, and that's assuming things don't bog down severely due to the bandwidth limitations for the interconnects.

    And that's just a brain. The body has a ton of neurons as well. All your senses feed the brain with contextual information. Then you have nontraditional senses like proprioception. The truth is, so much of our experience of the world is fundamentally tied to our senses, that we have no real idea what it would be like to simulate consciousness on a chip (or many). It's entirely possible we'd just end up with something like a deep coma or vegetative state.

  2. Which still means each individual GPU card would need to be about 1000x more powerful than now. 500,000x to get real-time.

  3. A top end Radeon GPU has an on card memory bandwidth of around 500 gigabytes per second (real world, not peak). The PCIe 4.0 bus maxes out at 64 gigabytes per second. How exactly are you going to network these GPUs between disparate machines?

  4. Hah! (Ack! God-like powers!) Heinlein's "The Moon Is a Harsh Mistress" has a "conscious" computer with a sense of humor. Totally amazing book for 1966.

  5. Yeah, that's why I wrote below, "Sounds stupid." I should have been clearer, but a living mouse has more than 23 billion synapses, and it's not even close to a human.

  6. I'd agree. Especially as I recall my daughters playing the Sims back in the day. Sweet girls then, and real Samaritans now (much more than me), but the glee when they removed the ladders from a swimming pool so the swimmers would drown (or the doors from a room with a sim and a stove so he would eventually catch on fire while cooking and be unable to escape), just so they could grow the size of their sim graveyard, was a bit unsettling.

  7. I don't understand this. A neural network learns by adjusting synapse strength. How do you do that without storing the strength of each synapse?

  8. You assume that the amount of time to simulate a second of brain activity will not increase as you increase the number of neurons & synapses. It may be that as the number of possible connections increases (probably exponentially), so too does the amount of time needed to simulate each second.

  9. I don't think the aim is to produce an intelligence or AI with this. While knowing how a brain works, (assuming nothing supernatural in consciousness), may help with that, that is not the main goal here. The main goal is to understand how a brain works and for that you do need to model the brain.
    Even if you understand the principle of lift via air-pressure differential, a model of a bird's wing will help you understand how that particular implementation of the principle achieves its outcome versus another implementation, e.g. the stiff delta wings of an airplane, or the wings of a hawk vs a bumblebee, etc.

  10. Dennard scaling was still a thing 15 years ago (well, more than today anyway), and GPUs were still immature and Moore's law was going full tilt. The last 7 years saw an improvement in GPU performance of a factor ~3x at the same power at a ~3x cost increase. Extrapolating that in a straight line gets you to ~9x the performance at ~9x the cost in 15 years. Pure compute may have performed a bit better, and a purpose built accelerator maybe gets you 10x for a specific task.

    There's no way the next 15 years will bring a factor 480 improvement in GPUs; more like a factor of 2-10. A die shrink is no longer a guarantee that you can fit more transistors, lower the cost or consume less power. If you make the transistors smaller and find that you need to space them further apart and clock them slower, then that will be it and CMOS is dead. TSMC has some things in the pipe that they call "3 nm" and "5 nm" respectively, neither of which is representative of the size of the transistors or their performance under the old Dennard scaling regime. 5 nm is not yet a commercial success, but exists and is 30% less power at the same clock speed. 3 nm is expected to be similar and to require GAAFET just to get those 30%. Then there is a big question mark over whether there is anything more to be had. Not even 2x scaling is certain.

    The next big leg up in computing performance will be something that replaces silicon CMOS. That will take at least 20 years to scale.

  11. I don't think that invalidates what I am saying.
    And not terribly relevant, but we can "fly" without wings: jet-pack, hot air balloon, blimp, airship, helicopter, hovercraft, rocket, circus "cannon", flying bedstead, ski jump, gyrocopter, powered paraglider, jet gyrodyne/rotodyne, rocket-pack, trebuchet, teeterboard, water blob, flyboard…
    Even if you eliminate the human-as-projectile ones, there are still quite a few ways to get off the ground. Not the greatest example of "wingless", as it does have some pathetic wings, but one of my favorites:

  12. Aeroplanes turned out not to need, or want, flapping wings. But they did turn out to need wings.
    Aeroplanes turned out not to need feathers, but they do generally want tails.
    Aeroplanes turned out not to need beaks, but they did end up with the sort of sleek, curvaceous look of many birds that is otherwise only found in fish.
    They don't have claws, but they do have legs that sort of tuck up and hide while they are flying.

    The point being that from the point of view of say Leonardo DaVinci, it was by no means obvious which features of birds would be a good idea for a flying machine and which would be silly.

  13. Compute isn't the problem, it's the low hanging fruit that needs to be picked before the long hard work can begin.
    The problem is the near-total lack of proper understanding of how the brain functions.

  14. "the order of 1 trillion synaptic connections meaning that simulations require several terabytes of data – an unrealistic memory requirement for a single desktop machine."

    In my lab, I have a couple of standard Dell servers based on the Zen 2 AMD architecture. They have 4 TB of DDR4 RAM each. I can fit a couple of GPUs in there too if needed. Bandwidth to GPUs with PCIe gen 4 is a decent 31.5 GB/s. Will be twice very soon and 4 times that shortly after.
    So it looks like they used modest hardware in the experiment.

  15. Simulating the brain does not mean producing intelligent beings. In spite of tools like this we really have a poor understanding of how brains work and what intelligence actually is.

  16. The idea that it is best to have something that is a closer match to the thing being simulated is intuitive, but not grounded in experience or logic.
    If airplanes had to flap their wings, would we ever go Mach 1? If cars had legs, the maintenance costs and energy required to run them would be astronomical. And 65 mph on the highway would probably be out.

  17. The new approach is the big story here, not the hardware exactly. Using a "GPU to ‘procedurally’ generate connectivity and synaptic weights ‘on the go’ as spikes are triggered – removing the need to store connectivity data in memory." Though GPUs are also superheros in this story.
    The other important thing here is that it is likely you could use the thousands of GPUs in a current supercomputer to scale this up and simulate full brains, probably even human brains.

  18. So eventually someone will be able to create a person in their basement.

    At which point we have to consider if it has rights under the law, the same as our biological children. The answer may be denied a few times but it only has to come back as "yes" one time. At which point the government will feel constrained to regulate and control the process, but will find this at least as difficult as enforcing prohibition, at best, especially as technology continues forward.

    Then a few decades later they start finding humans who have created virtual worlds on their private computers, complete with billions of intelligent beings. Never mind that they could have gotten similar results in more ethical ways, if they can do it, some people will.

    What then? Enfranchise them? Double the population of intelligent voting entities on the planet in a single day? It's going to be ugly, or it's going to involve solutions we haven't even thought of yet.

  19. GPUs are cheaper and have a mature industrial pipeline upgrading them every year, but yes, we'll need to make the switch at some point. Seems like we need an early killer app to drive it.

    Wasn't there some talk of building dedicated deep learning cards?

  20. …a modern GPU, with approximately 2,000 times the computing power available 15 years ago…

    …each biological second took 7.7 min in the ground state and 8.4 min in the resting state…

    8 min * 60 sec/min = 480 sec

    So between improvements in hardware and this architectural change, the last 15 years have led to a 2000 fold improvement, and we are now a factor of 480 away from doing this in real time. You'd have to be rather pessimistic to think this won't happen in the next 15 years.

    Of course, that's only a portion of a small monkey's brain, so a full simulated mind would still require a supercomputer. But perhaps a relatively small one.
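The commenter's arithmetic, spelled out in code (using the rounded 8-minute figure; the article reports 7.7-8.4 minutes per biological second):

```python
minutes_per_bio_second = 8               # article reports 7.7-8.4 min; round to 8
slowdown = minutes_per_bio_second * 60   # wall-clock seconds per biological second
print(slowdown)                          # the factor still separating us from real time
```

Whether a further ~480x improvement arrives in 15 years is exactly what the scaling-pessimist replies above dispute.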

  21. The conclusion I extract is that today one can have on the desktop the power of a 15-year-old supercomputer, at a tiny fraction of the cost and with a tiny fraction of the energy needed in the past.

  22. This implies that a supercomputer, which is easily more than 10,000x as powerful as a home PC and can be GPU-based, could now, with this software, simulate entire brains in real time. That is an even more spectacular story to follow up on.

  23.  the capacity to simulate brain models of almost unlimited size

    Where almost unlimited size = 2.4% the size of a mouse's brain.

    And so far they can simulate it being unconscious.

    So I'm interpreting this as:

    Good stuff. Valuable work that needs to be done, but they are still only at the beginning of the foundational development work required to start planning an actual working brain simulation, even though the opening paragraphs imply one has already been achieved.

