MIT Technology Review reports that scientists led by Eduardo Iáñez of Miguel Hernández University have, for the first time, combined a number of desirable features into a single brain-computer interface that is noninvasive, spontaneous, and asynchronous. They use four different models, each making assumptions that sometimes contradict the others'. This way, however a subject's brain happens to be wired, all the computer has to figure out is whether the subject means "left" or "right" in order to direct a robot arm in two dimensions.
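The article doesn't publish the decoding code, but the idea of reducing several models' conflicting guesses to a single left/right command can be sketched roughly, for illustration only, as a majority vote (the function name and inputs here are hypothetical, not the researchers' actual method):

```python
# Hypothetical sketch: combine four EEG decoders' direction guesses
# into one "left"/"right" command by majority vote.
from collections import Counter

def ensemble_direction(model_outputs):
    """Return the direction ('left' or 'right') most models agree on."""
    counts = Counter(model_outputs)
    return counts.most_common(1)[0][0]

# Four models with different (sometimes opposing) assumptions each
# emit a guess; the ensemble settles on a single command.
guesses = ["left", "right", "left", "left"]
command = ensemble_direction(guesses)  # -> "left"
```

A real system would weigh each model by its confidence rather than count votes equally, but the reduction to a binary command is the key simplification the article describes.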
Future research goals include moving this interface out of two dimensions and into three. If they succeed, they will have at least matched in humans an experiment in which macaques used an EEG-driven arm to feed themselves. That would be quite a feat for patients who are currently unable to perform such tasks, and the main barrier appears to be how cleverly computers can process the signal; in other words, the sophistication of the algorithm.