1. DNA nanobots with metal nanoparticles (these can already release drugs in living animals based on EEG-monitored brain states)
2. Neural dust for ultra-precise sensor readings
3. Deep learning for identifying patterns in sensors
In August, Nextbigfuture reported that Ido Bachelet's team used a series of steps to enable human thoughts to control DNA nanobots inside cockroaches.
The team recorded EEG patterns, which were recognized online by an algorithm that in turn controlled the state of an electromagnetic field.
The field induces the local heating of billions of mechanically-actuating DNA origami robots tethered to metal nanoparticles, leading to their reversible activation and subsequent exposure of a bioactive payload.
Future techniques that build upon this prototype could help treat schizophrenia, depression and other mental disorders, since the drugs would activate only when a patient’s brain waves show signs of abnormality.
DNA nanobots are controlled by two locks, each one a special strand of DNA called an aptamer that binds to a target molecule—a receptor on the surface of cancer cells, for example. When the aptamer locks onto the target, the clamshell unzips, delivering the payload.
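The two aptamer locks act as a logical AND gate: the clamshell opens only when both locks find their targets. A toy sketch of that gating logic (the class and receptor names are hypothetical, not from the paper):

```python
# Toy model of the two-aptamer "clamshell" lock: the nanobot exposes its
# payload only when BOTH aptamer locks bind their targets (AND logic).

class ClamshellNanobot:
    def __init__(self, lock1_target, lock2_target, payload):
        self.lock1_target = lock1_target  # molecule aptamer 1 recognizes
        self.lock2_target = lock2_target  # molecule aptamer 2 recognizes
        self.payload = payload
        self.open = False

    def sense(self, surface_molecules):
        """Unzip (expose payload) only if both targets are present."""
        self.open = (self.lock1_target in surface_molecules and
                     self.lock2_target in surface_molecules)
        return self.payload if self.open else None

# Example: a bot keyed to two hypothetical cancer-cell surface receptors
bot = ClamshellNanobot("receptor_A", "receptor_B", "drug")
print(bot.sense({"receptor_A"}))                # None: only one lock engaged
print(bot.sense({"receptor_A", "receptor_B"}))  # drug: both locks open the shell
```

Requiring two simultaneous matches is what gives the nanobot its selectivity: either target alone is not enough to release the payload.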
“These nanorobots are the first system that approaches real arbitrary control of therapeutic molecules,” wrote the Israeli team in the paper. But they require us to find specific molecular targets — present in diseased but not normal states — for the robot to bind to. That’s already hard for cancer. For mental disorders, it’s nearly impossible.
The algorithm can be trained to track brain states that underlie ADHD or schizophrenia, or otherwise modified to suit other conditions, study author Shachar Arnon explained to New Scientist. For example, if the EEG detects signs of a burgeoning depressive episode, it could trigger the DNA robots to briefly expose antidepressants and counteract symptoms before they become full-blown. This way, the brain isn’t perpetually bathed in mind-altering drugs when they aren’t needed.
For the system to work as planned, the team envisions a hearing-aid-like EEG device that continuously and inconspicuously monitors brain activity. When abnormalities occur, it triggers a wearable — for example, a smartwatch, glasses or jewelry — to create the electromagnetic fields required to expose the drug.
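The envisioned closed loop can be sketched in a few lines of Python; the classifier, its threshold, and the function names below are illustrative assumptions, not details from the article:

```python
# Sketch of the envisioned closed loop: a wearable EEG feeds a trained
# classifier; when it flags an abnormal brain state, a second wearable
# generates the electromagnetic field that heats the nanobots and exposes
# the drug. The classifier and threshold here are illustrative stand-ins.

def classify_eeg(window):
    """Stand-in for the trained algorithm: flag a window as abnormal
    when its mean normalized band power crosses a hypothetical threshold."""
    return sum(window) / len(window) > 0.8

def field_states(eeg_stream):
    """Drive the field (and thus drug exposure) only during abnormal windows."""
    return [classify_eeg(w) for w in eeg_stream]

# Simulated stream of EEG windows (normalized band power, 0..1)
stream = [[0.2, 0.3], [0.9, 0.95], [0.4, 0.1]]
print(field_states(stream))  # [False, True, False]: drug exposed only briefly
```

The key design property is that the drug-exposure state simply tracks the classifier's output, so exposure is transient rather than continuous.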
Wireless neural dust is down to millimeter cubes and could shrink to 50-micron cubes for ultra-precise mind-computer interfaces and neural monitoring
Berkeley neural dust researchers have already shrunk the motes to 1-millimeter cubes – about the size of a large grain of sand – each containing a piezoelectric crystal that converts ultrasound vibrations from outside the body into electricity to power a tiny on-board transistor in contact with a nerve or muscle fiber. A voltage spike in the fiber alters the circuit and the vibration of the crystal, which changes the echo detected by the ultrasound receiver, typically the same device that generates the vibrations. That slight change, called backscatter, allows them to determine the voltage.
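The readout chain can be modeled as a simple forward/inverse pair: the nerve voltage modulates the echo amplitude, and the interrogator inverts that modulation to recover the voltage. The linear sensitivity assumed below is illustrative, not the crystal's real transfer function:

```python
# Toy model of neural-dust backscatter readout: the voltage on the fiber
# modulates the backscattered echo amplitude, and the external interrogator
# inverts the modulation. The linear sensitivity is an assumed toy value.

SENSITIVITY = 0.02   # fractional echo-amplitude change per mV (assumed)
BASELINE = 1.0       # normalized echo amplitude at zero nerve voltage

def echo_amplitude(nerve_mv):
    """Forward model: a voltage on the fiber shifts the echo amplitude."""
    return BASELINE * (1 + SENSITIVITY * nerve_mv)

def recover_voltage(measured_amp):
    """Interrogator side: invert the modulation to estimate the voltage."""
    return (measured_amp / BASELINE - 1) / SENSITIVITY

spike = 0.5  # a 0.5 mV extracellular spike
amp = echo_amplitude(spike)
print(round(recover_voltage(amp), 3))  # 0.5: voltage recovered from backscatter
```

The point of the scheme is that the mote itself transmits nothing actively; all the power and the uplink ride on the interrogator's ultrasound beam.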
A major hurdle in brain-machine interfaces (BMI) is the lack of an implantable neural interface system that remains viable for a substantial fraction of a primate lifetime. Recently, sub-mm implantable, wireless electromagnetic (EM) neural interfaces have been demonstrated in an effort to extend system longevity. However, EM systems do not scale down in size well due to the severe inefficiency of coupling radio waves at mm and sub-mm scales.
They propose an alternative wireless power and data telemetry scheme using distributed, ultrasonic backscattering systems to record high frequency (~kHz) neural activity. Such systems will require two fundamental technology innovations:
1) thousands of 10 – 100 um scale, free-floating, independent sensor nodes, or neural dust, that detect and report local extracellular electrophysiological data via ultrasonic backscattering, and
2) a sub-cranial ultrasonic interrogator that establishes power and communication links with the neural dust. We performed the first in vitro experiments, which verified that the predicted scaling effects follow theory and that the extreme efficiency of ultrasonic transmission can enable scaling of the sensing nodes down to tens of micrometers.
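A back-of-the-envelope comparison shows why ultrasound couples efficiently at these scales while EM does not: sound in tissue travels roughly 200,000 times slower than light, so at the same frequency its wavelength is that much shorter and far better matched to a dust mote. The tissue permittivity used below is an assumed round value:

```python
# Compare wavelengths at a 10 MHz carrier in tissue: ultrasound vs EM.
# A ~100 um dust mote is comparable to the ultrasonic wavelength but tiny
# relative to the EM wavelength, which is why EM coupling is so inefficient
# at mm and sub-mm scales.

C_LIGHT = 3.0e8      # m/s, speed of light in vacuum
C_SOUND = 1540.0     # m/s, speed of sound in soft tissue
EPS_R = 50.0         # assumed relative permittivity of tissue at ~10 MHz

f = 10e6  # 10 MHz carrier

lambda_us = C_SOUND / f                    # ultrasonic wavelength
lambda_em = C_LIGHT / (f * EPS_R ** 0.5)   # EM wavelength in tissue

print(f"ultrasound: {lambda_us * 1e6:.0f} um")  # ultrasound: 154 um
print(f"EM:         {lambda_em:.1f} m")         # EM:         4.2 m
```

At 10 MHz the ultrasonic wavelength is on the order of a mote's size, while the EM wavelength is several meters, about four orders of magnitude too large for an efficient sub-mm antenna.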
Deep learning better than humans at identifying images
Error rates for trained humans are about 5%, while deep learning is now at about 3% on image and speech tasks.
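For context, the error rate on image benchmarks is typically top-5 classification error: a sample counts as correct if the true label appears among the model's five highest-ranked guesses. A minimal sketch of how that metric is scored, with made-up predictions:

```python
# How an image-classification error rate is scored: a sample is correct if
# the true label appears in the model's top-5 ranked predictions; the error
# rate is the fraction of samples that fail. All data below is invented.

def top5_error(predictions, labels):
    """predictions: per-sample ranked label lists; labels: true labels."""
    misses = sum(1 for preds, truth in zip(predictions, labels)
                 if truth not in preds[:5])
    return misses / len(labels)

preds = [["cat", "dog", "fox", "car", "cup"],
         ["dog", "cat", "fox", "car", "cup"],
         ["car", "cup", "fox", "dog", "bus"]]
truth = ["cat", "cat", "cat"]  # third sample misses the true label
print(round(top5_error(preds, truth), 2))  # 0.33
```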
SOURCES- New Scientist, PLOS One, Nervana, UC Berkeley