Gizmodo interviewed John Lizzi, Manager of the Distributed Intelligent Systems Laboratory at GE Global Research, about General Electric’s approach to the future of robotics.
If you look at the practical applications of robotics, the vast majority of robots you see today are in factories doing high-precision, high-speed work: picking and placing, grinding, deburring, painting, that sort of thing. These first-generation robots are extremely good at a lot of things, but they're not very aware of their environments and they're not adapted to working around humans. They're in cages, separated from humans, and they're extremely expensive in terms of both the robots themselves and the support equipment around them.
The new generation of robotics is riding a lot of trends, such as Moore's Law, so we can put more intelligence on the robot itself. The costs of computation and sensors are coming down. There's this whole movement around collaborative robotics, with robots that are very easily taught, very cheap, and very able to work closely with humans, while employing new technology such as SLAM (Simultaneous Localization and Mapping) for autonomous vehicles and similar applications. These trends are coming together and allow us to let the robots out of those cages and have them work in the more dynamic, more unconstrained environments I mentioned.
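SLAM, mentioned above, is the problem of building a map of an unknown environment while simultaneously estimating the robot's own position within it. The sketch below is a deliberately minimal one-dimensional illustration of the core idea (not GE's implementation, and all numbers are invented): a robot with drifting odometry records a landmark's position while its pose is still well known, then uses a later re-observation of that landmark to cancel the error its odometry has accumulated.

```python
# Toy 1-D illustration of the SLAM idea (a sketch, not a real SLAM system).
# A robot maps a landmark early, drives with drifting odometry, then
# corrects its accumulated pose error by re-observing the mapped landmark.

def observe(true_x, true_l):
    """Range to the landmark; perfect sensing, to keep the sketch simple."""
    return true_l - true_x

true_x, true_l = 0.0, 10.0   # ground truth: robot at 0, landmark at 10
x_est = 0.0                  # robot's own pose estimate

# 1. Map the landmark while the pose estimate is still accurate.
l_est = x_est + observe(true_x, true_l)        # landmark mapped at 10.0

# 2. Drive forward; odometry systematically over-reports each step.
for _ in range(5):
    true_x += 1.0
    x_est += 1.2                               # 20% odometry drift

drift_error = abs(x_est - true_x)              # has grown to about 1.0

# 3. Re-observe the mapped landmark and correct the pose ("loop closure").
x_est = l_est - observe(true_x, true_l)
corrected_error = abs(x_est - true_x)          # back to essentially zero
print(drift_error, corrected_error)
```

Real SLAM systems do this with noisy sensors, many landmarks, and probabilistic filters or graph optimization rather than a single exact correction, but the drift-then-correct loop above is the essential mechanism that lets robots navigate outside controlled factory cells.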
Are we talking about a robotic apprentice that would be working alongside a more skilled human?
We see a range of different things. For example, there are applications around assembly. There are some tasks that humans are really good at, and it will take robots some time to catch up on those: dexterity, manipulating small things, creative assembly tasks. We still want humans to do what they're better at, but there are other tasks where robots could do a lot of the work, such as going and grabbing parts, handing a person tools, or transporting materials from one place to another. There are a lot of places where we can leverage the skills of both humans and robots.
You can also think of it in terms of nonhuman skills. Imagine a larger robot that could give a human superhuman strength, or imagine something large that GE makes that a human could guide into place using a robot.
When we get to non-factory industrial environments, such as power plants, you could have a robot acting as an apprentice to the human. The human and robot could start by working together; over time the robot could learn some of those tasks and start doing them more proactively, while the human focuses on other things.
There's also task coordination, such as repairing an asset, where the robot collaborates with the human by providing extra physical or virtual capabilities.
The robots could also be an advance team. Instead of having engineers sitting out there all the time, there's no reason why we couldn't have the robot out there doing the work, acting as the front line for servicing the asset or taking a look at what happened.
GE sees a future of robots roving around, able to identify what is outside normal operating parameters.
What do you see as the big hurdle that needs to be overcome to create these service robots?
I think the big challenges are going to be building more sophisticated robot perception, more accurate manipulation, and the validation and verification of autonomous systems. It's easy to do this where you can control the environment, but open that up and validation and verification become more and more of a challenge.
SOURCES – Gizmodo, GE Global Research