Researchers are working hard to harness the hands-free nature of Google Glass to improve the lives of those with compromised mobility, vision and hearing.
Glass has an expected on-sale date sometime in 2014. As a product still in its infancy, it recalls the iPhone’s early days as a smartphone with promise and woefully few apps. But while Glass’ full potential will be determined down the road, it already has distinguished itself as a potentially life-changing tool for the disabled.
Researchers in a range of disciplines are looking into ways to leverage Glass’ inherent advantage over the smartphone — its hands-free nature — to help those who navigate life with compromised mobility, vision and hearing. There’s even work being done to assist those with autism, using facial recognition software to help identify the emotions of others.
Perhaps not since the advent of text-to-speech and speech-recognition software has a technology had such potential to help the disabled.
“Glass will be revolutionary for the disabled,” says Rosalind Picard, founder of the Affective Computing Research Group at Massachusetts Institute of Technology’s Media Lab, whose focus is autism and communication technology.
“With facial analytics, it’s possible to, with the subject’s approval, have Glass scan a face and put up a green light if the person is intrigued, yellow if they’re confused or red if they’re bored,” she says. Then, chuckling, she adds, “It could even whisper at you during that date, ‘Hey, she’s losing interest.'”
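The mapping Picard describes is, at its core, a lookup from an inferred emotional state to an indicator color. As a rough sketch of that idea — the labels and function below are illustrative, not any actual Glass or Media Lab API — it might look like:

```python
def engagement_light(state: str) -> str:
    """Map a hypothetical facial-analytics label to an indicator color.

    Mirrors Picard's example: intrigued -> green, confused -> yellow,
    bored -> red. The classifier producing `state` is assumed, not real.
    """
    colors = {"intrigued": "green", "confused": "yellow", "bored": "red"}
    return colors.get(state, "off")  # unrecognized states show no light
```

The hard part, of course, is the facial analysis producing the label; the display side is trivial by comparison.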
Picard says speech recognition is getting so good that a deaf person soon could see a real-time transcript of what a friend is saying in Glass’ prism. A person with limited vision could take walking directions from Glass through its bone-conducting speaker housed in the right temple.
Glass is being developed with input from the disabled.
Kane has been using Glass daily since August, mostly to snap photos on the go. But he’s bullish on the device’s potential to vastly improve quality of life for the blind. “Having something on your head that is pointing naturally in the direction you are looking is invaluable,” he says.
Jeff Bigham, who conducts research in the same area, agrees. “Imagine this,” he says, excitedly. “A blind person with Glass walks by a store and Glass recognizes it and announces what it is. Maybe that person didn’t notice that it changed from a restaurant to a dry cleaner, but now he knows. These are things the rest of us take for granted, but for a blind person, it’s truly powerful.”
Bigham, an assistant professor at the Human-Computer Interaction Institute at Carnegie Mellon University in Pittsburgh, has developed software that can do exactly such things. A video he posted on YouTube shows a blind man wearing Glass walking through a room filled with equipment; each time a piece of equipment comes into Glass’ view, the software describes it to the wearer.
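The behavior in that video — announce each object once as it enters view — can be sketched in a few lines. This is a hypothetical outline of the control loop, not Bigham’s actual software: `detections` stands in for whatever labels an object recognizer reports per camera frame, and `speak` stands in for Glass’ bone-conduction audio output.

```python
def announce_new_objects(detections, seen, speak):
    """Announce each newly visible object exactly once.

    detections: labels a (hypothetical) recognizer found in this frame.
    seen: a set tracking labels already announced across frames.
    speak: callable that voices a string to the wearer.
    """
    for label in detections:
        if label not in seen:
            seen.add(label)       # remember it so we don't repeat ourselves
            speak(f"{label} ahead")
```

Calling it frame after frame with an overlapping set of detections announces only the new arrivals, which is what makes the experience feel like narration rather than noise.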
Glass’ other big score comes in a simpler form. Bigham’s VizWiz is a smartphone-based project that has seen 5,000 blind users ask more than 70,000 visual questions ranging from “What’s this spot on my baby’s head?” to “Do I look nice?” — questions and photos that are then sent out to the Web and answered in less than a minute by live respondents working through Amazon Mechanical Turk.
Where smartphone-based VizWiz users have to contend with the inherent hassle of “using a handheld device while blind,” Glass offers the chance to provide “continuous, hands-free visual assistance,” Bigham says.
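Structurally, VizWiz is a simple pipeline: bundle a photo with a spoken question, fan it out to human respondents, and return the first answer. The sketch below captures that shape only — the class and function names are invented for illustration, and the real system routes through Amazon Mechanical Turk rather than in-process callables.

```python
from dataclasses import dataclass, field


@dataclass
class VisualQuestion:
    """A photo paired with a question, as a VizWiz user would submit."""
    photo: bytes
    question: str
    answers: list = field(default_factory=list)


def route_to_crowd(q, workers):
    """Fan a question out to respondents and return the first answer.

    `workers` stands in for live Mechanical Turk respondents; here each
    is just a callable that may return a string or None.
    """
    for worker in workers:
        answer = worker(q)
        if answer is not None:
            q.answers.append(answer)
            return answer
    return None  # no one answered
```

In the real service the latency budget matters most — answers arrive in under a minute — so the production equivalent of `route_to_crowd` queries workers in parallel rather than in sequence.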
One blind Glass Explorer says the potential for greatness lies just beyond the product’s initial limitations.
“I’m a little frustrated with (Glass), not because it’s something I can’t use, but because with trivial modifications I would use it all the time,” says Sina Bahram, founder of disability-focused Prime Access Consulting in Cary, N.C., and a Ph.D. candidate in computer science at North Carolina State University. “It’s not pie in the sky. For me, Glass could be an amazing conduit to the outside world.”
Among his complaints are a volume control that is “embedded too far deep in the menu,” a hypersensitive temple touch pad and a “ban on facial recognition (out of privacy concerns) that really hurts those of us who are blind.”
Bahram is at least hoping for object recognition to become a reality. “Think about what it’s like for me to hail a cab,” he says. “Now, picture me wearing a device that sees a cab heading my way, and alerts me to it. That’s a paradigm shift.”
Brian Wang is a futurist thought leader and a popular science blogger with 1 million readers per month. His blog, Nextbigfuture.com, is ranked the #1 science news blog. It covers many disruptive technologies and trends, including space, robotics, artificial intelligence, medicine, anti-aging biotechnology, and nanotechnology.
Known for identifying cutting-edge technologies, he is currently a co-founder of a startup and a fundraiser for high-potential early-stage companies. He is the Head of Research for Allocations for deep-technology investments and an angel investor at Space Angels.
A frequent speaker at corporations, he has been a TEDx speaker, a Singularity University speaker and guest at numerous interviews for radio and podcasts. He is open to public speaking and advising engagements.