Robotics pioneer Rodney Brooks says better machine vision could enable a new generation of industrial robots. A coming wave of industrial robots will be smart enough to work safely alongside humans in many different settings, says Brooks, a professor emeritus of robotics at MIT and a founder of iRobot.
“I think there’s room for a real revolution by putting sensors and computation into industrial robots,” says Brooks. “What if the robots were smarter and they could go into smaller companies and be easier for ordinary people to use?”
If manufacturing robots could recognize their human coworkers and interact with them safely, Brooks says, they could be used in many more manufacturing environments, assisting with repetitive and physically demanding manual tasks.
In 2008, Brooks founded a new company, Heartland Robotics, to develop robots for manufacturing. The company has said its robots will be intelligent, adaptable, and inexpensive, but it remains in stealth mode and has not revealed what technologies those robots will use.
In the last few years, robotics researchers have made progress in machine vision, thanks in part to the falling cost of computing power and to the vast photo and image resources that can be pulled from the Web to train computer-vision systems to recognize different objects. However, Brooks says, giving machines more human-like vision remains one of the biggest obstacles to building more practical robots.
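The idea of training a recognizer from labeled examples can be illustrated with a deliberately tiny sketch. Real systems learn from millions of Web-sourced photos; here each "image" is a hypothetical hand-made feature vector, and a nearest-neighbor lookup stands in for a trained classifier. The feature names and values are invented for illustration only.

```python
import math

def nearest_label(query, training_set):
    """Return the label of the training example closest to `query`
    in Euclidean distance -- a stand-in for a learned classifier."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    best_features, best_label = min(training_set, key=lambda ex: dist(ex[0], query))
    return best_label

# Hypothetical features per "image": (redness, roundness, texture score)
training_set = [
    ((0.9, 0.8, 0.2), "apple"),
    ((0.2, 0.9, 0.1), "ball"),
    ((0.3, 0.1, 0.9), "carpet"),
]

print(nearest_label((0.85, 0.75, 0.25), training_set))  # → apple
```

Scaled up, the same principle holds: more labeled images give the system more reference points, which is why Web-scale photo collections have helped recognition systems improve.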
“Perception is really, really hard. For robots, I think it’s largely unsolved,” says Brooks. “Image-based recognition has worked surprisingly well, [but] it can’t do the recognition that a three-year-old child can do.”
Commercial machine vision systems are still usually focused on a narrow task. For example, some cars now come equipped with a system that can identify pedestrians and other vehicles, even in a cluttered scene. The system, developed by Mobileye, based in Israel, is connected to an onboard computer that applies the brakes if a collision seems imminent.
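A common way such a system decides that a collision "seems imminent" is a time-to-collision test: estimated distance divided by closing speed, compared against a threshold. The sketch below shows that decision logic only; the function name, threshold, and inputs are assumptions for illustration, not Mobileye's actual implementation.

```python
def should_brake(distance_m, closing_speed_mps, ttc_threshold_s=1.5):
    """Brake when estimated time-to-collision drops below a threshold.

    distance_m: vision-estimated range to the obstacle, in meters
    closing_speed_mps: rate at which that range is shrinking, in m/s
    """
    if closing_speed_mps <= 0:
        # Obstacle is holding distance or pulling away -- no collision course.
        return False
    time_to_collision_s = distance_m / closing_speed_mps
    return time_to_collision_s < ttc_threshold_s

print(should_brake(30.0, 10.0))  # 3.0 s to impact → False
print(should_brake(10.0, 10.0))  # 1.0 s to impact → True
```

The hard part, of course, is not this arithmetic but producing reliable distance and speed estimates from camera images in a cluttered scene, which is exactly the perception problem the article describes.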
“This is the first wide-scale, highly demanding use of computer vision,” says Amnon Shashua, a professor of computer science at the Hebrew University of Jerusalem, and a cofounder of Mobileye.
Shashua says the company’s computer vision system works well because it only has to identify a handful of objects. But he hopes that within the next five years, the system will be able to reliably recognize almost everything within a scene. “There are at least 1,000 object classes you need to know in an image to at least do semi-autonomous driving,” including signs, lights, guard rails, poles, bridges, exits, and more, he said during a symposium on artificial intelligence at MIT last week.
Mobileye is developing specialized hardware to support the specific demands of rapid image recognition. “There’s still a long way to go to build hardware that is efficient, low cost, low power, that can do very complex computer vision,” Shashua adds.
Better machine vision systems might lead to significant advances in robotics. “How we deploy our robots is limited by what we can do with perception, so improvements in perception will lead them to be smarter and have modicums of common sense,” says Brooks.
Brian Wang is a futurist thought leader and a popular science blogger with 1 million readers per month. His blog, Nextbigfuture.com, is ranked the #1 science news blog. It covers many disruptive technologies and trends, including space, robotics, artificial intelligence, medicine, anti-aging biotechnology, and nanotechnology.
Known for identifying cutting-edge technologies, he is currently a co-founder of a startup and a fundraiser for high-potential early-stage companies. He is the Head of Research for Allocations for deep-technology investments and an angel investor at Space Angels.
A frequent speaker at corporations, he has been a TEDx speaker and a Singularity University speaker, and has been a guest on numerous radio shows and podcasts. He is open to public speaking and advising engagements.