Singularity/AI related: watching the responses of neurons in a living animal

To this advance I say: holy crap. A very creative piece of work. It will massively and rapidly increase detailed understanding of brain function.

Thanks to a new imaging system, researchers at MIT’s Picower Institute for Learning and Memory have gotten an unprecedented look at how genes shape the brain in response to the environment. This is the first study to directly visualize the molecular activity of individual neurons in the brains of live animals at single-cell resolution, and to track how that activity changes in the same neurons, day by day for a week, as their environment changes.

This advance, coupled with other brain disease models, could “offer unparalleled advantages in understanding pathological processes in real time, leading to potential new drugs and treatments for a host of neurological diseases and mental disorders,” said Nobel laureate Susumu Tonegawa, a co-author of the study.

Tonegawa, director of the Picower Institute and the Picower Professor of Biology and Neuroscience at MIT, together with co-author Wang and colleagues, found that visual experience induces a protein that works as a molecular “filter,” enhancing the overall selectivity of the brain’s responses to visual stimuli.

The protein, called “Arc,” was previously detected in the hippocampus, where it is believed to help store lasting memories by strengthening synapses, the connections between neurons. The Picower Institute’s unexpected finding was that Arc also blocks the activity of neurons with low orientation selectivity that are not well “tuned” to vertical and horizontal lines, while keeping neurons with high orientation selectivity.
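To make “orientation selectivity” concrete: a common measure in visual neuroscience (my illustration, not something specified in the article) is an orientation selectivity index (OSI) that compares a neuron’s firing at its preferred orientation against the orthogonal one. The function name and numbers below are invented for the sketch.

```python
def orientation_selectivity_index(r_pref, r_orth):
    """OSI = (R_pref - R_orth) / (R_pref + R_orth).

    Approaches 1 for a sharply tuned neuron, 0 for an untuned one.
    r_pref: response (e.g. spikes/s) to the preferred orientation.
    r_orth: response to the orthogonal orientation.
    """
    if r_pref + r_orth == 0:
        return 0.0
    return (r_pref - r_orth) / (r_pref + r_orth)

# A well-tuned neuron: strong response to vertical stripes, weak to horizontal.
print(orientation_selectivity_index(10.0, 2.0))  # ~0.67 (high selectivity)

# A poorly tuned neuron: similar response to both orientations.
print(orientation_selectivity_index(5.0, 4.0))   # ~0.11 (low selectivity)
```

On this kind of measure, the finding is that Arc suppresses the low-OSI neurons while the high-OSI ones remain active.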

To investigate this, the MIT team developed a state-of-the-art imaging system in which transparent cranial windows were implanted over the primary visual cortex, allowing the researchers to monitor protein expression in the brains of live mice over time.

The study exploited the power of two-photon microscopy (so called because it uses the absorption of two infrared photons to excite fluorescence in tissue), which allows imaging of living tissue up to 1 millimeter deep, enough for researchers to see proteins expressed within individual neurons in the brain.

They then created a mouse model in which a coding portion of the Arc gene was replaced with a jellyfish gene encoding a green fluorescent protein (GFP). Neural activities that normally activate the Arc gene then activated the GFP, leaving a fluorescent trace detectable by two-photon microscopy. This allowed the researchers to image neuronal activation patterns induced by visual experience, thus uncovering the Arc protein’s role in orchestrating neurons’ reactions to natural sensory stimuli.
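The reporter logic described above can be sketched as a toy model (my illustration, with invented names, thresholds, and decay rates, not the study’s actual method): activity strong enough to switch on the Arc promoter instead produces GFP, which then fades over subsequent days, so repeated daily imaging reads out a decaying trace of past activation.

```python
def gfp_trace(daily_activity, threshold=0.5, gain=1.0, half_life_days=1.0):
    """Toy Arc-GFP reporter: per-day fluorescence of one neuron.

    daily_activity: list of activity levels (arbitrary 0-1 units), one per day.
    threshold: activity needed to "fire" the Arc promoter and express GFP.
    half_life_days: how fast previously expressed GFP decays.
    """
    decay = 0.5 ** (1.0 / half_life_days)  # per-day retention factor
    signal, trace = 0.0, []
    for a in daily_activity:
        signal *= decay          # GFP from earlier days fades
        if a > threshold:        # activity trips the Arc promoter...
            signal += gain * a   # ...so fresh GFP is expressed
        trace.append(signal)     # what the microscope would see that day
    return trace

# A neuron driven on days 1-3 (e.g. while the striped cylinder is shown),
# then quiet: the fluorescence builds up, then decays away.
print(gfp_trace([0.9, 0.9, 0.9, 0.0, 0.0]))
```

The point of the real design is exactly this readout: because GFP is made only when the Arc gene would have been activated, the fluorescence pattern seen through the cranial window maps which neurons responded to the visual experience.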

The genetically engineered mice were let loose in an environment containing a cylinder covered with vertical or horizontal stripes, and the proteins in their brains were monitored as the mice saw the cylinders each day.
