Openwater is creating a portable, inexpensive MRI. fMRI can already predict which words and which images you are thinking of. There have, however, been mistakes in published work, including cherry-picking and double counting of MRI data.
Below are some examples of work that has gone through rigorous peer review to check the math and the statistical-correlation procedures: no cherry-picking, no double counting. This is solid work showing what inferences we can make about what the brain is doing and thinking just by looking at the brain's use of oxygen, voxel by voxel.
Enabling brain stimulation as well as recording
Openwater can focus benign near-infrared light very finely, down to sub-millimeter or even a few microns depending on the depth. We can already demonstrate roughly 100-micron resolution, or focusing power, at 10 cm of depth; this enables stimulation of specific areas using light itself. No probes, no needles, no cutting open a skull, no injections. While these numbers are more than enough for a variety of products, we are working on improving both the depth and the focusing resolution and making rapid progress.
There are other research teams working on brain scanning and stimulation, but I saw no one working on a portable non-invasive approach. The field mostly divides into two directions:
a) the meditation-mindfulness teams using EEG, which has no real spatial resolution, and
b) the basic research groups working toward the estimated five or more Nobel prizes it will take just to understand how neurons work. The latter group focuses on invasive approaches: opening up the skull and/or inserting chemicals, physical probes, and needles directly into the brain. There is also some fringe work, like what Elon Musk popularized in a talk last year; I'll call all of that "neural lace": essentially injectable, super-small silicon chips that interweave through your blood vessels and brain.
Non-invasive brain interfaces should prove more popular than injections into the brain. Openwater invented a non-invasive suite of approaches that leverages new manufacturing processes coming online in the world's LCD factories, yielding a ski-hat form-factor wearable at consumer-electronics price points as we hit volume production.
Background research papers on fMRI mind reading
One crucial test for any quantitative model of the brain is to show that the model can be used to accurately decode information from evoked brain activity. Several recent neuroimaging studies have decoded the structure or semantic content of static visual images from human brain activity. Here we present a decoding algorithm that makes it possible to decode detailed information about the object and action categories present in natural movies from human brain activity signals measured by functional MRI. Decoding is accomplished using a hierarchical logistic regression (HLR) model that is based on labels that were manually assigned from the WordNet semantic taxonomy. This model makes it possible to simultaneously decode information about both specific and general categories, while respecting the relationships between them. Our results show that we can decode the presence of many object and action categories from averaged blood-oxygen level-dependent (BOLD) responses with a high degree of accuracy (area under the ROC curve > 0.9). Furthermore, we used this framework to test whether semantic relationships defined in the WordNet taxonomy are represented the same way in the human brain. This analysis showed that hierarchical relationships between general categories and atypical examples, such as organism and plant, did not seem to be reflected in representations measured by BOLD fMRI.
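To make the decoding idea concrete, here is a minimal sketch of one unit of a category decoder: a single logistic-regression classifier that predicts, from a vector of voxel responses, whether one category is present. This is not the authors' hierarchical (HLR) code; the data, dimensions, and learning rate are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in data: 200 time points x 50 voxels of "BOLD responses", and a
# binary label for one hypothetical category (e.g. "animal present in clip").
n_samples, n_voxels = 200, 50
true_w = rng.normal(size=n_voxels)
X = rng.normal(size=(n_samples, n_voxels))
y = (X @ true_w + 0.5 * rng.normal(size=n_samples) > 0).astype(float)

def sigmoid(z):
    # Clip to avoid overflow in exp for large |z|.
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))

# Fit the weights by gradient descent on the logistic loss with an L2 penalty.
w = np.zeros(n_voxels)
for _ in range(500):
    p = sigmoid(X @ w)
    grad = X.T @ (p - y) / n_samples + 1e-3 * w
    w -= 0.5 * grad

# Decoding accuracy on the training data; a real analysis would cross-validate
# and report area under the ROC curve rather than raw accuracy.
acc = np.mean((sigmoid(X @ w) > 0.5) == (y > 0.5))
```

In the paper's hierarchical model, one such unit exists per WordNet category, and the relationships in the taxonomy constrain the predictions so that specific and general categories stay mutually consistent; that coupling is omitted here.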
Quantitative modeling of human brain activity can provide crucial insights about cortical representations, and can form the basis for brain decoding devices. Recent functional magnetic resonance imaging (fMRI) studies have modeled brain activity elicited by static visual patterns, and have shown that it is possible to reconstruct these images from brain activity measurements. However, blood oxygen level dependent (BOLD) signals measured using fMRI are very slow, so it has been difficult to model brain activity elicited by dynamic stimuli such as natural movies. Here we present a new motion-energy encoding model that largely overcomes this limitation. Our motion-energy model describes fast visual information and slow hemodynamics by separate components. We recorded BOLD signals in occipito-temporal visual cortex of human subjects who passively watched natural movies, and fit the encoding model separately to individual voxels. Visualization of the fit models reveals how early visual areas represent moving stimuli. To demonstrate the power of our approach we also constructed a Bayesian decoder, by combining estimated encoding models with a sampled natural movie prior. The decoder provides remarkable reconstructions of natural movies, capturing the spatio-temporal structure of the viewed movie. These results demonstrate that dynamic brain activity measured under naturalistic conditions can be decoded using current fMRI technology.
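The encode-then-decode pipeline above can be sketched, very loosely, with a linear stand-in: ridge regression maps stimulus features to voxel responses (the encoding model), and decoding picks the clip from a library (standing in for the sampled natural-movie prior) whose predicted response best matches the measured one. Everything here is simulated and illustrative; the real model uses motion-energy (Gabor) filters and a proper Bayesian posterior, not this least-squares match.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative stand-in: "motion-energy features" of movie clips and simulated
# BOLD responses of a handful of voxels, generated from a known linear map.
n_clips, n_features, n_voxels = 300, 20, 30
features = rng.normal(size=(n_clips, n_features))
true_B = rng.normal(size=(n_features, n_voxels))
bold = features @ true_B + 0.1 * rng.normal(size=(n_clips, n_voxels))

# Encoding step: ridge regression from features to voxel responses. Solving
# jointly here is equivalent to fitting each voxel separately, as in the paper.
lam = 1.0
B = np.linalg.solve(features.T @ features + lam * np.eye(n_features),
                    features.T @ bold)

# Decoding step: given a new measured response, score every clip in a library
# (the "prior" over natural movies) by prediction error and pick the best.
library = rng.normal(size=(500, n_features))
target_idx = 123
measured = library[target_idx] @ true_B + 0.1 * rng.normal(size=n_voxels)
predictions = library @ B                         # shape (500, n_voxels)
errors = np.sum((predictions - measured) ** 2, axis=1)
best = int(np.argmin(errors))                     # index of the decoded clip
```

With low noise, `best` recovers the clip that generated the measurement; the paper's decoder does the analogous selection over sampled natural-movie segments, then averages the top candidates to produce the reconstruction.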