Desktop GPU Simulates a 24-Billion-Synapse Mammalian Cortex Model

Dr James Knight and Prof Thomas Nowotny from the University of Sussex’s School of Engineering and Informatics used the latest Graphics Processing Units (GPUs) to give a single desktop PC the capacity to simulate brain models of almost unlimited size. The work makes large brain simulations accessible to researchers with modest budgets.

The research builds on the work of US researcher Eugene Izhikevich, who pioneered a similar method for large-scale brain simulation in 2006.

The researchers applied Izhikevich’s technique to a modern GPU, with approximately 2,000 times the computing power available 15 years ago, to create a cutting-edge model of a macaque’s visual cortex (with 4.1 million neurons and 24.2 billion synapses) that previously could only be simulated on a supercomputer.

The researchers’ GPU-accelerated spiking neural network simulator uses the large amount of computational power available on a GPU to ‘procedurally’ generate connectivity and synaptic weights ‘on the fly’ as spikes are triggered, removing the need to store connectivity data in memory.
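To illustrate the idea, here is a minimal NumPy sketch of procedural connectivity (not GeNN’s actual implementation): a presynaptic neuron’s outgoing connections and weights are regenerated from a fixed per-neuron seed each time it spikes, so no synapse matrix is ever stored. The population sizes, connection probability, and weight statistics below are illustrative assumptions.

```python
# Minimal sketch of 'procedural connectivity': regenerate a neuron's outgoing
# synapses deterministically from its ID whenever it spikes, instead of
# storing them. All parameters here are hypothetical, for illustration only.
import numpy as np

N_PRE, N_POST = 10_000, 10_000   # hypothetical population sizes
P_CONNECT = 0.1                  # hypothetical connection probability
W_MEAN, W_SD = 0.05, 0.01        # hypothetical weight statistics

def deliver_spike(pre_id: int, post_input: np.ndarray) -> None:
    """Rebuild neuron pre_id's outgoing synapses on the fly and
    accumulate their weights into the postsynaptic input buffer."""
    rng = np.random.default_rng(pre_id)          # deterministic per-neuron seed
    targets = rng.random(N_POST) < P_CONNECT     # same targets on every spike
    weights = rng.normal(W_MEAN, W_SD, N_POST)   # same weights on every spike
    post_input[targets] += weights[targets]

# Usage: whenever neuron 42 spikes, its connectivity is rebuilt, used, discarded.
post_input = np.zeros(N_POST)
deliver_spike(42, post_input)
```

Because the generator is reseeded with the neuron’s ID on every call, the same connections and weights are reproduced each time without ever occupying memory between spikes; on a GPU this trades cheap arithmetic for expensive memory.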

Initialization of the researchers’ model took six minutes and simulation of each biological second took 7.7 min in the ground state and 8.4 min in the resting state – up to 35% less time than a previous supercomputer simulation. In 2018, on one rack of an IBM Blue Gene/Q supercomputer, initialization of the model took around five minutes and simulating one second of biological time took approximately 12 minutes.
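A quick check of the “up to 35% less time” figure, using the per-biological-second timings quoted above:

```python
# Sanity check of the quoted speedup (timings taken from the article above).
gpu_minutes = 7.7           # fastest GPU case, per biological second
supercomputer_minutes = 12  # Blue Gene/Q rack, per biological second
saving = 1 - gpu_minutes / supercomputer_minutes
print(f"{saving:.0%}")      # -> 36%, i.e. roughly 35% less time
```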

Prof Nowotny, Professor of Informatics at the University of Sussex, said: “Large-scale simulations of spiking neural network models are an important tool for improving our understanding of the dynamics and ultimately the function of brains. However, even small mammals such as mice have on the order of 1 trillion synaptic connections, meaning that simulations require several terabytes of data – an unrealistic memory requirement for a single desktop machine.”
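A back-of-the-envelope calculation shows why the memory figure in the quote is plausible; the per-synapse storage sizes below are illustrative assumptions, not numbers from the paper.

```python
# Rough estimate of the memory needed to store ~1 trillion synapses explicitly.
n_synapses = 1_000_000_000_000          # ~1 trillion connections (mouse-scale)
bytes_per_synapse = 4 + 4               # e.g. 32-bit target index + 32-bit weight
total_bytes = n_synapses * bytes_per_synapse
print(f"{total_bytes / 1e12:.1f} TB")   # -> 8.0 TB, far beyond desktop RAM
```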

Nature Computational Science – Larger GPU-accelerated brain simulations with procedural connectivity

Abstract
Simulations are an important tool for investigating brain function but large models are needed to faithfully reproduce the statistics and dynamics of brain activity. Simulating large spiking neural network models has, until now, needed so much memory for storing synaptic connections that it required high-performance computer systems. Here, we present an alternative simulation method we call ‘procedural connectivity’ where connectivity and synaptic weights are generated ‘on the fly’ instead of stored and retrieved from memory. This method is particularly well suited for use on graphical processing units (GPUs)—which are a common fixture in many workstations. Using procedural connectivity and an additional GPU code generation optimization, we can simulate a recent model of the macaque visual cortex with 4.13 × 10^6 neurons and 24.2 × 10^9 synapses on a single GPU.

SOURCES- University of Sussex, Nature Computational Science
Written by Brian Wang, Nextbigfuture.com
