A novel implant seeded with muscle cells could better integrate prosthetic limbs with the body, allowing amputees greater control over robotic appendages. The construct, developed at the University of Michigan, consists of tiny cups, made from an electrically conductive polymer, that fit on nerve endings and attract the severed nerves. Electrical signals coming from the nerve can then be translated and used to move the limb.
Living interface: Muscle cells (shown here) are grown on a biological scaffold. Severed nerves remaining from the lost limb connect to the muscle cells in the interface, which transmits electrical signals that can be used to control the artificial arm. Credit: Paul Cederna
“This looks like it could be an elegant way to control a prosthetic with fine movement,” says Rutledge Ellis-Behnke, a scientist at MIT who was not involved in the research. “Rather than having a big dumb piece of plastic strapped to the arm, you could actually have an integrated tool that feels like it’s part of the body.”
The most successful method for controlling a prosthesis to date is a surgical procedure in which nerves that were previously attached to muscles in a lost arm and hand are transplanted into the chest. When the wearer thinks about moving the hand, chest muscles contract, and those signals are used to control the limb. While a vast improvement over existing methods, this approach still provides a limited level of control–only about five nerves can be transplanted to the chest.
The new interface, developed by plastic surgeon Paul Cederna and colleagues, builds on this concept, using transplanted muscle cells as targets rather than intact muscle. After a limb is severed, the nerves that originally attached to it continue to sprout, searching for a new muscle with which to connect. (This biological process can sometimes create painful tangles of nerve tissue, called neuromas, at the tip of the severed limb.) “The nerve is constantly sending signals downstream to tell the hand what to do, even if the hand isn’t there,” says Cederna. “We can interpret those signals and use them to run a prosthesis.”
The interface consists of a small cuplike structure, about one-tenth of a millimeter in diameter, that is surgically implanted at the end of the nerve, relaying both motor and sensory signals from the nerve to the prosthesis. Inside the cup is a scaffold of biological tissue seeded with muscle cells–because motor and sensory nerves make connections onto muscle in healthy tissue, the muscle cells provide a natural target for wandering nerve endings. The severed nerve grows into the cup and connects to the cells, transmitting electrical signals from the brain. Because it is coated with an electrically active polymer, the cup acts as a wire, picking up those signals and relaying them to a robotic limb. Cederna’s team doesn’t develop prostheses itself, but he says the signals could be transmitted via existing wireless technology.
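The article doesn’t say how a prosthesis controller would interpret the electrical signals picked up by the cup, but the general idea can be sketched as simple envelope detection: rectify the recorded voltage trace, smooth it, and issue a movement command whenever activity crosses a threshold. Everything below–the sampling values, the threshold, the window size, and the `decode_nerve_signal` helper–is a hypothetical illustration, not part of the Michigan work.

```python
import numpy as np

def decode_nerve_signal(samples, threshold=0.3, window=50):
    """Toy decoder: rectify the raw voltage trace from one cup,
    smooth it with a moving average, and emit a binary 'activate'
    command wherever the envelope exceeds a threshold.
    All parameter values are illustrative, not from the study."""
    rectified = np.abs(samples)
    kernel = np.ones(window) / window
    envelope = np.convolve(rectified, kernel, mode="same")
    return envelope > threshold

# Simulated trace: quiet baseline, then a burst of nerve firing.
rng = np.random.default_rng(0)
trace = np.concatenate([
    0.05 * rng.standard_normal(500),  # resting nerve
    1.00 * rng.standard_normal(500),  # firing burst
])
commands = decode_nerve_signal(trace)
```

A real controller would map several such channels onto distinct hand and arm movements; the point here is only that a thresholded envelope turns a noisy analog trace into a usable on/off command.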
Brain-Computer Interfacing (BCI) captures brain signals and translates them into commands, allowing humans to control devices such as computers, robots, rehabilitation technology, and virtual reality environments just by thinking.
An experiment conducted by Dr Christopher James of the University of Southampton’s Institute of Sound and Vibration Research goes a step further. Its aim was to expand the current limits of this technology and show that brain-to-brain (B2B) communication is possible.
In his experiment, one person used BCI to transmit thoughts, translated as a series of binary digits, over the internet to another person, whose computer received the digits and conveyed them to the second user’s brain by flashing an LED lamp.
While attached to an EEG amplifier, the first person generated and transmitted the binary digits by imagining moving their left arm for zero and their right arm for one. The second person was also attached to an EEG amplifier, and their PC picked up the stream of digits and flashed an LED lamp at two different frequencies, one for zero and the other for one. The flashing pattern is too subtle to be consciously perceived by the second person, but it is picked up by electrodes placed over the recipient’s visual cortex.
The encoded information is then extracted from the second user’s brain activity, and the PC deciphers whether a zero or a one was transmitted, demonstrating genuine brain-to-brain communication.
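The pipeline described above can be simulated end to end, assuming an idealized steady-state visual evoked potential (the visual cortex phase-locking to the LED’s flicker rate) and a spectral-power comparison on the decoding side. The flicker frequencies, sampling rate, and helper names below are all illustrative, not taken from James’s actual setup.

```python
import numpy as np

FS = 256                     # sampling rate of the simulated EEG, Hz
F_ZERO, F_ONE = 10.0, 15.0   # illustrative LED flicker frequencies
DURATION = 2.0               # seconds of recording per transmitted bit

def simulate_visual_cortex(bit, rng):
    """Visual cortex activity phase-locks to the LED's flicker rate,
    so model it as a sinusoid at that frequency buried in noise."""
    t = np.arange(int(FS * DURATION)) / FS
    freq = F_ONE if bit else F_ZERO
    return np.sin(2 * np.pi * freq * t) + 0.8 * rng.standard_normal(t.size)

def decode_bit(eeg):
    """Compare spectral power at the two candidate flicker frequencies
    and report whichever dominates."""
    spectrum = np.abs(np.fft.rfft(eeg))
    freqs = np.fft.rfftfreq(eeg.size, d=1 / FS)
    p_zero = spectrum[np.argmin(np.abs(freqs - F_ZERO))]
    p_one = spectrum[np.argmin(np.abs(freqs - F_ONE))]
    return int(p_one > p_zero)

rng = np.random.default_rng(42)
message = [0, 1, 1, 0, 1]    # bits the sender encodes by imagined movement
received = [decode_bit(simulate_visual_cortex(b, rng)) for b in message]
```

Because each flicker frequency falls on an exact FFT bin at this sampling rate and duration, the evoked peak stands far above the noise floor and the bit stream decodes reliably even at low signal-to-noise ratios.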