Army MIND Lab can decode brain signals in real time

In an Army Research Laboratory facility here called “The MIND Lab,” a desktop computer accurately determined which category of target images a Soldier had chosen to focus on.

MIND stands for “Mission Impact Through Neurotechnology Design,” and Dr. Anthony Ries used technology in the lab to decode the Soldier’s brain signals.

Ries, a cognitive neuroscientist who studies visual perception and target recognition, hooked the Soldier up to an electroencephalogram – a device that reads brain waves – and then had him sit in front of a computer to look at a series of images that would flash on the screen.

There were five categories of images: boats, pandas, strawberries, butterflies and chandeliers. The Soldier was asked to choose one of those categories, but keep the choice to himself. Then images flashed on the screen at a rate of about one per second. Each image fell into one of the five categories. The Soldier didn’t have to say anything, or click anything. He had only to count, in his head, how many images he saw that fell into the category he had chosen.

When the experiment was over, after about two minutes, the computer revealed that the Soldier had chosen to focus on the “boat” category. The computer accomplished that feat by analyzing brainwaves from the Soldier. When a picture of a boat had been flashed on the screen, the Soldier’s brain waves appeared different from when a picture of a strawberry, a butterfly, a chandelier or a panda appeared on the screen.
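Target images tend to evoke a stronger electrical response, often described as a P300 wave, a few hundred milliseconds after they appear. As a minimal sketch, assuming a single parietal EEG channel, a 250 Hz sampling rate, and a simple average-amplitude score in a 300-500 ms window (the article does not describe the lab’s actual classifier), the chosen category could be inferred roughly like this:

```python
# A minimal sketch (not the MIND Lab's actual pipeline) of inferring the
# chosen category from EEG epochs. Assumption: target-category images evoke
# a larger P300-like deflection roughly 300-500 ms after image onset, so the
# category with the strongest averaged response is the likely choice.
import numpy as np

SFREQ = 250                                               # assumed sampling rate (Hz)
P300_WINDOW = slice(int(0.3 * SFREQ), int(0.5 * SFREQ))   # 300-500 ms after onset

def infer_chosen_category(epochs, labels):
    """epochs: (n_trials, n_samples) array, one EEG channel per image onset.
    labels: category name shown on each trial. Returns the inferred target."""
    scores = {}
    for category in set(labels):
        idx = [i for i, lab in enumerate(labels) if lab == category]
        erp = epochs[idx].mean(axis=0)            # average response to this category
        scores[category] = erp[P300_WINDOW].mean()
    return max(scores, key=scores.get)            # strongest P300-like response wins

# Synthetic demo: 100 flashed images, five categories; "boat" trials carry an
# added positive deflection in the P300 window, mimicking the target response.
rng = np.random.default_rng(0)
categories = ["boat", "panda", "strawberry", "butterfly", "chandelier"]
labels = [categories[i % 5] for i in range(100)]
epochs = rng.normal(0.0, 1.0, (100, SFREQ))
for i, lab in enumerate(labels):
    if lab == "boat":
        epochs[i, P300_WINDOW] += 3.0
print(infer_chosen_category(epochs, labels))      # prints "boat"
```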

“We want to create a solution where image analysts can quickly sort through large volumes of image data, while still maintaining a high level of accuracy, by leveraging the power of the neural responses of individuals,” he said.

Photo: Dr. Anthony Ries instructs Pfc. Kenneth Blandon on how to play a computer game, using only his eyes to control the direction of fire of a bubble-shooting cannon at Aberdeen Proving Ground, Md., Nov. 3, 2015. Ries is a cognitive neuroscientist who studies visual perception and target recognition. Blandon is a mechanic with the 20th Chemical, Biological, Radiological, Nuclear and Explosives Command.

“Our ability to collect and store imagery data has surpassed our ability to analyze it,” Ries said.

Ries thinks that one day the intelligence community might use computers and brainwaves, or “neural signals,” to more rapidly identify targets of interest in intelligence imagery, in much the same way the computer in his lab identified pictures of boats as targets of interest for the Soldier who had chosen that category.

“What we are doing is basically leveraging the neural responses of the visual system,” he said. “Our brain is a much faster image processor than any computer is. And it’s better at detecting subtle differences in an image.”

Ries said that in a typical image analysis scenario, an analyst might have a large image to look over, and might accomplish that by starting at the top left and working his way down, going left to right. The analyst would look for things of interest to him. “It takes a long time. They may be looking for a specific vehicle, house, or airstrip – that sort of thing.”

What Ries and fellow researchers are doing is cutting such an image up into “chips,” smaller sections of the larger image, and flashing them on a screen in the same way the boats and pandas and butterflies appeared on the screen for the Soldier.
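As a minimal sketch, assuming plain fixed-size, non-overlapping tiles (the article does not give the lab’s chip size or layout), the chipping step could look like this:

```python
# A minimal sketch of cutting a large overhead image into "chips" for rapid
# serial presentation. The 256-pixel chip size and non-overlapping grid are
# illustrative assumptions, not the lab's parameters.
import numpy as np

def make_chips(image, chip_size=256, stride=256):
    """Yield (top, left, chip) tiles covering a 2-D image array."""
    height, width = image.shape[:2]
    for top in range(0, height - chip_size + 1, stride):
        for left in range(0, width - chip_size + 1, stride):
            yield top, left, image[top:top + chip_size, left:left + chip_size]

# Example: a 2048 x 2048 image yields an 8 x 8 grid of 64 chips to flash.
image = np.zeros((2048, 2048), dtype=np.uint8)
print(len(list(make_chips(image))))   # 64
```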

“The analyst sits in front of the monitor, with the electroencephalogram on measuring his brain waves,” Ries said. “All the little chips are presented really fast. They are able to view this whole map in a fraction of the time it would take to do it manually.”

The computer would then measure the analyst’s neural response to each chip viewed.

“Whenever the Soldier or analyst detects something they deem important, it triggers this recognition response,” he said, adding that research has shown that as many as five images per second could be flashed on the screen, while still getting an accurate neural response. “Only those chips that contain a feature that is relevant to the Soldier at the time – a vehicle, or something out of the ordinary, somebody digging by the side of the road, those sorts of things – trigger this response of recognizing something important.”

Images identified by the analyst’s mind as being of interest would then be tagged for further inspection.
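A minimal sketch of that tagging step, assuming the same kind of single-channel average-amplitude score in an assumed 300-500 ms window and an illustrative threshold (the article does not describe the lab’s actual detector), might look like this:

```python
# A minimal sketch of flagging chips whose neural response looks like a
# recognition response. The scoring rule and threshold are illustrative
# assumptions; the MIND Lab's actual detector is not described in the article.
import numpy as np

SFREQ = 250                                               # assumed sampling rate (Hz)
P300_WINDOW = slice(int(0.3 * SFREQ), int(0.5 * SFREQ))   # 300-500 ms after chip onset

def tag_chips_of_interest(chip_epochs, threshold=2.0):
    """chip_epochs: dict mapping chip position -> 1-D EEG epoch recorded while
    that chip was on screen. Returns the chips tagged for further inspection."""
    flagged = []
    for chip_id, epoch in chip_epochs.items():
        score = epoch[P300_WINDOW].mean()         # crude recognition-response score
        if score > threshold:
            flagged.append(chip_id)
    return flagged

# Synthetic demo: the chip at (0, 256) carries a strong response and is tagged.
rng = np.random.default_rng(1)
chip_epochs = {
    (0, 0): rng.normal(0.0, 1.0, SFREQ),
    (0, 256): rng.normal(0.0, 1.0, SFREQ) + 3.0,
}
print(tag_chips_of_interest(chip_epochs))         # [(0, 256)]
```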

The automated system could greatly reduce the time it takes to process an image, meaning that more images – more of that gathered intelligence data – can be processed sooner and more quickly be of value to Soldiers on the ground.

When Ries and his fellow researchers cut a larger intelligence image into smaller parts and display them in rapid succession to an analyst, the analyst still has to look at the entire image – the same number of square inches of image overall. But Ries said that by cutting it up into smaller chips and displaying them rapidly, they take much of the work out of the analysis.

Instead of sliding his fingers over the image, or marking on it, or writing something, or typing, the analyst has only to think “of interest” or “not of interest.” And that kind of decision can be made almost instantly – and a computer hooked to an EEG can detect when that decision has been made, what the decision is, tag the image with the result, and then present the next image in just a split second.


SOURCES – Army, YouTube