Facebook Will Make Wearable Mind-Reader to Read Your Inner Voice

Seeing infrared
Like other cells in your body, neurons consume oxygen when they’re active. So if we can detect shifts in oxygen levels within the brain, we can indirectly measure brain activity. Think of a pulse oximeter — the clip-like sensor with a glowing red light you’ve probably had attached to your index finger at the doctor’s office. Just as it’s able to measure the oxygen saturation level of your blood through your finger, we can also use near-infrared light to measure blood oxygenation in the brain from outside of the body in a safe, non-invasive way. This is similar to the signals measured today in functional magnetic resonance imaging (fMRI) — but using a portable, wearable device made from consumer-grade parts.
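For a rough sense of the arithmetic involved, here is a minimal sketch of the modified Beer-Lambert relation that near-infrared brain-sensing systems of this general kind rely on: absorbance changes measured at two wavelengths are converted into changes in oxygenated and deoxygenated hemoglobin. The extinction coefficients, source-detector distance, and path-length factor below are placeholder values for illustration, not parameters of Facebook's device.

```python
# Illustrative sketch of the modified Beer-Lambert law used in fNIRS-style
# sensing: optical-density changes at two near-infrared wavelengths are
# converted into changes in oxygenated (HbO2) and deoxygenated (HbR)
# hemoglobin concentration. All numbers below are placeholders, not
# calibration data from any real device.
import numpy as np

# Placeholder extinction coefficients [HbO2, HbR] at two wavelengths (1/(mM*cm))
EXTINCTION = np.array([
    [1.49, 3.84],   # ~760 nm: HbR absorbs more strongly
    [2.53, 1.80],   # ~850 nm: HbO2 absorbs more strongly
])
SOURCE_DETECTOR_CM = 3.0   # source-detector separation on the scalp (assumed)
DPF = 6.0                  # differential path-length factor (assumed)

def hemoglobin_changes(delta_od_760, delta_od_850):
    """Solve the 2x2 Beer-Lambert system for [dHbO2, dHbR] in mM."""
    delta_od = np.array([delta_od_760, delta_od_850])
    path = SOURCE_DETECTOR_CM * DPF
    return np.linalg.solve(EXTINCTION * path, delta_od)

# Example: a small drop in 760 nm absorbance with a rise at 850 nm is the
# signature of increased oxygenation over an active cortical region.
d_hbo2, d_hbr = hemoglobin_changes(-0.002, 0.004)
print(f"dHbO2 = {d_hbo2:+.5f} mM, dHbR = {d_hbr:+.5f} mM")
```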

Facebook funded scientists at the University of California, San Francisco (UCSF). The UCSF researchers asked patients to answer, out loud, simple multiple-choice questions presented in random order.

Decoding what patients with speech impairments are trying to say is improved by taking into account the full context in which they are trying to communicate.

Nature Communications – Real-time decoding of question-and-answer speech dialogue using human cortical activity

The lateral surface of the human cortex contains neural populations that encode key representations of both perceived and produced speech. Recent investigations of the underlying mechanisms of these speech representations have shown that acoustic and phonemic speech content can be decoded directly from neural activity in superior temporal gyrus (STG) and surrounding secondary auditory regions during listening. Similarly, activity in ventral sensorimotor cortex (vSMC) can be used to decode characteristics of produced speech based primarily on kinematic representations of the supralaryngeal articulators and the larynx for voicing and pitch. A major challenge for these approaches is achieving high single-trial accuracy rates, which is essential for a clinically relevant implementation to aid individuals who are unable to communicate due to injury or neurodegenerative disorders.

Recently, speech decoding paradigms have been implemented in real-time applications, including the ability to map speech-evoked sensorimotor activations, generate neural encoding models of perceived phonemes, decode produced isolated phonemes, detect voice activity, and classify perceived sentences. These demonstrations are important steps toward the development of a functional neuroprosthesis for communication that decodes speech directly from recorded neural signals. However, to the best of our knowledge there have not been attempts to decode both perceived and produced speech from human participants in a real-time setting that resembles natural communication. Multimodal decoding of natural speech may have important practical implications for individuals who are unable to communicate due to stroke, neurodegenerative disease, or other causes. Despite advances in the development of assistive communication interfaces that restore some communicative capabilities to impaired patients via non-invasive scalp electroencephalography, invasive microelectrode recordings, electrocorticography (ECoG), and eye tracking methodologies, to date there is no speech prosthetic system that allows users to have interactions on the rapid timescale of human conversation.

Abstract
Natural communication often occurs in dialogue, differentially engaging auditory and sensorimotor brain regions during listening and speaking. However, previous attempts to decode speech directly from the human brain typically consider listening or speaking tasks in isolation. Here, human participants listened to questions and responded aloud with answers while we used high-density electrocorticography (ECoG) recordings to detect when they heard or said an utterance and to then decode the utterance’s identity. Because certain answers were only plausible responses to certain questions, we could dynamically update the prior probabilities of each answer using the decoded question likelihoods as context. We decode produced and perceived utterances with accuracy rates as high as 61% and 76%, respectively (chance is 7% and 20%). Contextual integration of decoded question likelihoods significantly improves answer decoding. These results demonstrate real-time decoding of speech in an interactive, conversational setting, which has important implications for patients who are unable to communicate.
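To make the "contextual integration" step concrete, here is a minimal sketch of the idea described in the abstract: the decoded question likelihoods re-weight the prior over possible answers before the answer decoder's own evidence is applied. The question set, answer set, probabilities, and mapping below are invented for illustration and are not taken from the paper.

```python
# A minimal sketch of contextual integration: decoded question likelihoods
# re-weight the prior over possible answers, and that prior is then combined
# with the answer decoder's likelihoods. All sets and numbers are invented.
import numpy as np

QUESTIONS = ["How is your room currently?", "How do you like your music?"]
ANSWERS = ["Bright", "Dark", "Quiet", "Loud"]

# Hypothetical plausibility of each answer given each question (rows sum to 1).
ANSWER_GIVEN_QUESTION = np.array([
    [0.45, 0.45, 0.05, 0.05],   # room question -> bright/dark plausible
    [0.05, 0.05, 0.45, 0.45],   # music question -> quiet/loud plausible
])

def decode_answer(question_likelihoods, answer_likelihoods):
    """Combine decoded question and answer likelihoods into a posterior."""
    q = np.asarray(question_likelihoods, dtype=float)
    q /= q.sum()                                  # normalize question evidence
    prior = q @ ANSWER_GIVEN_QUESTION             # context-adjusted answer prior
    posterior = prior * np.asarray(answer_likelihoods, dtype=float)
    return posterior / posterior.sum()

# Example: the question decoder favors the "music" question, so even an
# ambiguous answer signal is pulled toward "Quiet"/"Loud".
post = decode_answer([0.2, 0.8], [0.3, 0.2, 0.3, 0.2])
print(dict(zip(ANSWERS, np.round(post, 3))))
```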

SOURCES- Facebook, Nature Communications
Written By Brian Wang, Nextbigfuture.com

23 thoughts on “Facebook Will Make Wearable Mind-Reader to Read Your Inner Voice”

  1. I do agree with the point about no noticeable intervention.
    Given that terrorist attacks and major crimes still occur, either

    • The government decides to let them occur, maybe to keep their monitoring secret, in which case what are they actually thinking to prevent?
    • The government does actually stop 95% of them, only the ones that accidentally slip through occur. In which case why haven’t such things fallen by a factor of 20 in the past 20 years?
    • Criminals and terrorists are aware of all this stuff and maintain a level of operational security that is far, far in advance of anything else they seem capable of. Also, why wouldn’t such anti-government types spill the beans?
    • Big Brother is an idiot.

    Possibilities are not necessarily mutually exclusive.

  2. I’d be suspicious for sure, but at least with the location stuff I know that if I turn off either the GPS radio or location services, all the apps that need location data stop working. So either it’s doing what it says it does, or all those app makers are playing along a bit too well.

    For location history, if I turn it off, I can’t see that data anymore. But I suppose there could still be a copy somewhere else.

Anyway, as I wrote elsewhere, as long as there's no noticeable intervention in most people's lives, whatever monitoring there is or isn't doesn't make much of a difference. If there's enough intervention to be noticeable, then people will know they're being monitored. It would no longer be just worry or suspicion.

  4. Zuck must have found some of Elon Musk’s old Neuralink notes in the trash and reverse engineered them!

  4. If you were actually worried about the government secretly tracking you, would you believe that selecting “OFF” on location tracking actually turned it off?

  5. Um, yes, I do – I can turn off the GPS radio, or I can disable the location services in the settings. Or just the location history, if I want to keep the real-time stuff.

  6. Do you have a button to turn off the position reporting functions on your mobile phone?

    Well, you can turn the whole thing off, but then you are without a phone, which may be unacceptable to your boss (in work hours), your spouse, and your daughter waiting to be picked up from an early finish at school.

  7. All it takes is a button to toggle the power and internet supply of your BCI (push-to-talk, like a walkie-talkie). A more advanced version can look for neural patterns that indicate intent, but there’s no need to make the interface any more difficult than it has to be.

  8. I can see a future in which it IS possible to control your mind reader output, but it takes training and skill.

    Much the same as the current situation, where someone who has gone to a university, especially an upper-level one, has had years of training in how to speak and write without breaking any of the current taboos of political correctness. The savage attacks on anyone who says something “unforgivable” largely serve to weed out people who have lower-class mannerisms or who aren’t good enough at reading social cues to know whether they really are “off the record” or not.

    Hint: It doesn’t correlate with whether someone says you are off the record. As that guy who was too open on the “internal, off the record, discussion space” at Google found out to his cost.

    The net effect is to harden the current social strata. If safety from mental thoughtcrime is only limited to those who had parents who could afford them a “subvocalisation tutor” when they were a teenager, then you won’t be in any danger of having any lower class oiks stealing your job just because they happen to work harder and have more talent than you do.

    And George Orwell directly addressed this. He wrote about double-think, where the surface thoughts had to be approved ones, even if deeper down you think something else.

  9. The big question is who has control over which thoughts leave the confines of your mind. If you retain the control over that, then it’s no different from speaking or typing. It would just be using a different translation path.

  10. My thought process, not my (external) speech. I can monitor it by paying attention to what’s going on inside my mind.

    I’m fairly sure I’m not imagining, because I can stop myself before the 2nd stage, or in the middle of it, and still retain the concept of the full sentence, with my “inner voice” saying only the first few words of it, or none at all. Or I can let the 2nd stage complete, and hear my “inner voice” saying the full sentence.

    (To be clear, I don’t feel the location where these two stages are happening. I can only tell that it’s two separate stages, and I have some control over the 2nd stage.)

  11. @Michael K
    How can you monitor your language process with your brain? How do you know that you are not just imagining the two stages of your speech?

  12. I honestly don’t think that Zuck understands that to use such a device is to abuse it, as though there is no difference between knowing what I ate for breakfast (because I posted about it) and knowing my inner thoughts about who would win in an MMA throwdown between AOC and Dan Crenshaw (for charity and with rules of course, we aren’t savages!).

    Zuck-bot v0.7 strikes me as a Replicant that was wiped and incorrectly re-imaged. He seems to be in a state of perpetual surprise that his wonderful toys could be misused and seems to be missing certain ethical subroutines.

  13. so they ask a question and then determine if the person is thinking a, b, c or d? doesn’t really sound too advanced…

  14. Yes, but so can voice processing, if the NLP AI is trained well enough and you’re not using difficult words (though I guess neural decoding would have the same issues).

  15. Does Zuckerberg really think that there are enough people who are willing to let him read their minds, for this to be a viable product?

    I mean, Musk makes me nervous, but with Zuckerberg there’s no question: you know in advance he’ll abuse it.

  16. I make it a rule to not give any more information to facebook than needed to keep my account active.

  17. If they’ll be reading the subvocalization signal, it won’t be saving any time compared to speaking.

  18. I’ve noticed that when I’m thinking, I think in two stages. First I form a concept of a sentence, and then I sort of repeat it out-loud inside my head. The 1st stage happens very quickly, almost instantaneously, but is rather hazy, and tends to fade away easily. The 2nd stage is much slower, about the same speed as external speech, but is much more concrete, and lets me remember and process that thought more fully. But if I concentrate, I’ve found that I can stop myself from repeating that initial concept, and still remember it in full. It’s difficult, but possible.

    I wonder how these two stages translate to brain activity. The 2nd stage feels the same as subvocalization, so it probably happens in the same part of the motor cortex where physical speech is controlled. Stopping it may be the same as subvocalization suppression during reading. But does the 1st stage occur in a different part of the brain? Is that 1st stage like the intent to move your hand, that you can then either translate into a motion or discard? Or is it some sort of pre-spike happening before the main spike? If the 2nd stage is subvocalization, then is the 1st stage the actual raw thought?

  19. And they expect people to use this to do food posts on FB?

    I’m having a hard time taking this seriously.

    Where’s the Back to the Future image of Ol’ Doc Brown and his mind reading hat?

    Can’t wait for the promotional material to come out of FB for this.

    Can’t wait for the family dog to put it on.

    We’re supposed to believe that FB won’t harvest other thoughts while people are making posts?

    Sometimes I wish that, instead of writing software that runs the world, I had a boss as clueless as Zuck so I could pitch ideas this dumb. “Imagine the time saved compared to speaking your words!!1!”
