Brian Litt and Brain-Computer Interfaces: Past, Present and Future
Brain-computer interfaces (aka brain-machine interfaces or neuroprosthetics), long of interest to science fiction writers and creative thinkers, became a government-funded research discipline in the United States beginning in the 1970s. The vision of its architects at DARPA and the National Science Foundation was to restore motor control to soldiers with brain, spinal cord, and limb injuries, and these programs continue to flourish today. Early devices sampled a variety of neural signals, including scalp EEG and evoked potentials, though the first dramatic successes arose roughly 20 years later from more modern technologies that allowed completely paralyzed (or “locked-in”) patients to operate computers or move robotic arms using nothing but their thoughts. These systems record multi-unit neuronal activity from small, targeted brain regions, compute transfer functions to transduce this activity into movement control signals, and conduct those signals to “effectors” such as computer cursors or robotic limbs. What has followed is an explosion of innovation in hardware (materials, batteries, computation speed and miniaturization), software (e.g. machine learning), and systems neuroscience that is producing a growing array of implantable neural recording and activation devices to treat disease, restore human function, and potentially augment it.
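The "transfer function" step described above is, in many motor BCI systems, a learned mapping from binned multi-unit firing rates to effector kinematics. A minimal sketch of the idea with an ordinary least-squares decoder, using entirely synthetic data (the unit count, bin count, and noise levels below are illustrative, not from any real recording):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: firing rates of 96 recorded units over
# 1000 time bins, and the 2-D cursor velocity observed in each bin.
n_bins, n_units = 1000, 96
true_weights = rng.normal(size=(n_units, 2))
rates = rng.poisson(lam=5.0, size=(n_bins, n_units)).astype(float)
velocity = rates @ true_weights + rng.normal(scale=0.5, size=(n_bins, 2))

# Fit a linear "transfer function" by least squares: velocity ~ rates @ W.
W, *_ = np.linalg.lstsq(rates, velocity, rcond=None)

# Decode: transduce a new bin of multi-unit activity into a control signal
# that would be conducted to an effector (e.g. a cursor).
new_rates = rng.poisson(lam=5.0, size=(1, n_units)).astype(float)
decoded_velocity = new_rates @ W  # 2-D cursor velocity command
```

Real systems typically use recursive filters (e.g. Kalman filters) updated in real time rather than a one-shot linear fit, but the structure is the same: neural activity in, kinematic command out.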
BCIs are now widely accepted in a variety of forms. Brain stimulation devices for movement disorders and pain are implanted in patients on almost every continent. New successes, such as recent reports of treating depression with brain stimulation, are world news. Auditory prostheses such as cochlear implants are now commonplace, visual prostheses have reached early milestones in restoring low-resolution sight, and haptics research holds promise to restore sensation after limb loss, brain injury, and peripheral nerve injury. Early areas of emphasis, such as prosthetic limb research, have made the most progress, using both real-time feedback to improve the responsiveness of artificial arms and legs and transplanted peripheral nerves to drive sensors. BCIs for speech remain slow, but they function well enough to be gaining users, and those for cognition, particularly for memory, are being tested in early forms, with great promise. Underlying all of these implementations is “neuroplasticity”: the brain’s ability, when trained, to adapt to regular, logical signals and extract meaning from even low levels of information. This is the case, for example, with cochlear implants, where patients can learn to interpret crude electrical stimulation delivered through a handful of macroelectrodes as intelligible speech.
The major hurdles to better BCIs are both technical and rooted in neuroscience. Materials science researchers must deliver more durable and better-tolerated implantable materials to prevent failure and rejection. Engineers must craft smaller, higher-resolution devices with more contacts and higher density that can also cover larger regions, in order to record from and activate the large neuronal networks involved in brain functions. Better machine learning techniques are required to extract pertinent information from neural signals without relying on human experts to identify it. Finally, ways of dramatically increasing information transfer rates and optimizing neuroplasticity are needed to achieve enough bandwidth between humans and devices to make their speed useful. Challenges on the neuroscience side are equally important, most crucially determining at what scale to record neural activity (e.g. single neurons, cortical columns, broad brain regions), how much activity, and over how large a region. We also need better techniques, both invasive and non-invasive, to map the diverse regions of the human brain that work together in cognition and other functions, in order to unlock how they work.
The future of BCI research is extremely bright. The scientific community worldwide is making rapid progress in each of the above challenge areas, as demonstrated by the number of devices being invented, tested, and deployed for human use, and by the dramatically increasing research literature on BCIs. Most crucially, the rate of information transfer from human brain to computer is rapidly increasing, though in part by using more invasive technologies. Taking the step from repairing damage and restoring function to augmenting our abilities to see, hear, move, or think is a dramatic one, with major ethical and moral implications. Devices to restore and enhance memory are already being tested, and our growing understanding of how memories are encoded and retrieved gives dim glimpses of how information might be transferred from computer storage to human consciousness, though this type of application seems far off now. Augmentation of strength, perhaps reducible to mechanical design once appropriate control is established, seems much less challenging by comparison. What seems most clear is that the pace of advancement in these areas is accelerating. That BCI research will eventually transition from plasticity and repair to augmentation is not in doubt. It is imperative that we think carefully about how and where, scientifically, this shift should take place, and how we might best guide the process.
One-way vs. two-way (open- or closed-loop)
Invasiveness (non-invasive, partially, very) and bandwidth
spatial scale (topology, degrees of freedom)
temporal scale (precision)
Treat disease (Parkinson's, epilepsy...)
Hearing, vision, touch, artificial limbs
Memory, speed, attention, perception, processing
Levels of Organization
Neuron, cortical columns, nuclei, functional networks, cortical regions
Map the network
Choose the connection site
Inject a signal (MUST contain information)
Neuroplasticity interprets over time
Performance = f(information quality, accessibility, bandwidth...)
Cochlear Implants
Need to know the hearing pathways of the brain
Alan Alda's science show played examples of sound simulated from 1, 4, 16, and 22 channels of a cochlear implant.
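Those channel-count demos reflect how cochlear implant processors work: the audio spectrum is split into N frequency bands, and only each band's amplitude envelope reaches the electrodes, so fewer channels means coarser spectral detail. A toy sketch of that band-splitting step, using FFT brick-wall bands (the 200 Hz-8 kHz log-spaced range is an assumption, roughly typical of such processors; real devices use proper filter banks and smoothed envelopes):

```python
import numpy as np

def band_envelopes(signal, sample_rate, n_channels):
    """Split a signal into n_channels log-spaced frequency bands
    (200 Hz - 8 kHz, crude FFT brick-wall filters) and return each
    band's rectified waveform - a stand-in for the per-electrode
    envelope a cochlear implant delivers."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    edges = np.geomspace(200.0, 8000.0, n_channels + 1)
    envelopes = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = np.where((freqs >= lo) & (freqs < hi), spectrum, 0)
        band_signal = np.fft.irfft(band, n=len(signal))
        envelopes.append(np.abs(band_signal))  # crude rectified envelope
    return np.array(envelopes)

# A 1 kHz tone lands in exactly one band either way, but with 4 channels
# that band is far wider than with 22, so spectral detail is lost.
sr = 16000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 1000 * t)
env4 = band_envelopes(tone, sr, 4)
env22 = band_envelopes(tone, sr, 22)
```

With one channel, all that survives is the overall loudness contour; the demos show that listeners (via neuroplasticity) can recover intelligible speech from surprisingly few bands.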
Data rates: about 25 bits/min in 2000 and about 50 bits/min now.
Higher rates with invasive implanted arrays.
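For context, bits-per-minute figures like these are commonly estimated with the Wolpaw information-transfer-rate formula, which scores each selection from the number of possible targets N and the selection accuracy P. A sketch with illustrative numbers (the 4-target speller, 90% accuracy, and 20 selections/min below are assumptions chosen to land near the figures quoted, not measured values):

```python
from math import log2

def wolpaw_bits_per_selection(n_targets: int, accuracy: float) -> float:
    """Wolpaw information transfer rate per selection, in bits."""
    n, p = n_targets, accuracy
    if p >= 1.0:
        return log2(n)  # perfect accuracy: full log2(N) bits per pick
    return log2(n) + p * log2(p) + (1 - p) * log2((1 - p) / (n - 1))

# Hypothetical: a 4-target speller at 90% accuracy, 20 selections/min.
bits_per_min = wolpaw_bits_per_selection(4, 0.90) * 20  # ~27 bits/min
```

Note how the formula penalizes errors, not just slowness: raising accuracy from 90% to 100% at the same selection rate would lift the same speller to 40 bits/min.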
The latest work uses silicon nanostrips (sliced from the tops of computer chips) placed on flexible substrates to reach 20-micron devices, increasing the resolution of the interfaces to 720 channels on the brain.
Also, new silk-based electronics, 2.5 microns thick, can improve signal strength.
The higher resolution shows that epilepsy affects a cloud of cortical columns rather than an entire golf-ball-sized area of the brain.
We can get to detailed maps of memory networks.
We could create learning during sleep; dreams currently replay memories at 6-8 times normal speed.
We can enhance learning and brain plasticity.
We could eventually archive intelligence and personality