Several indicators point to the robotics industry being on an exponential growth path. Key enabling technologies such as AI, energy storage, computer hardware, sensors, and actuators are steadily improving, and revenues are increasing. Robots now perform a growing number of tasks that formerly could only be done by humans. Some worry that robots may take jobs, while others argue that the robotics industry will create as many jobs as it displaces. In an interview with Sander Olson, QinetiQ Chief Technology Officer Richard Wiesman discusses the major advances in robotics over the past three decades, as well as the future of autonomous robots.
Question: Do you divide your time between the Field and Space Robotics Lab at MIT and the robotics corporation QinetiQ?

I divide my time between being a "Professor of the Practice" at MIT's Field and Space Robotics Laboratory (FSRL) and serving as Executive Vice President and Chief Technology Officer at QinetiQ North America. QinetiQ makes extensive use of technologies coming out of academic institutions and laboratories throughout the world.
Question: How long has QinetiQ had a robotics program?

I was the project director for the first robotics program in 1983. It was called the RAMROD program, carried out for the Navy, and it was designed to create a new generation of modular bomb-disposal robots. This was actually before QinetiQ existed; we were the Foster-Miller Corporation back then.

Question: How have robots evolved since 1983?

We have seen immense improvement in computational capabilities, such as better microcontrollers, hardware, and software. It really is true that your iPhone has more computing power than many computers from 1983. We have much better networking, man-machine interfaces, and machine-machine interfaces. We also have vastly improved RF, acoustic, and optical sensors, spanning both visible and non-visible light. And we have much better data storage devices. In all these areas we have also seen massive reductions in size, weight, and power consumption.

Question: Does QinetiQ favor batteries, fuel cells, or combustion engines for powering robots?

It is impossible to give a general answer to that – it depends on the specific robot and the specific mission. Smaller robots are generally powered by batteries. Larger, ground-based robots may also be powered by combustion engines, and they can benefit from the enormous energy that can be stored in gasoline and other fuels. For some robots, we may even use energy harvesting, such as solar cells. In the future, we expect to have much better batteries, as well as superior fuel cells, so improved energy storage options should greatly expand the usefulness of robots.

Question: Does QinetiQ research autonomous robots?

We have had a number of programs pertaining to autonomous robots, and we continue to make this area a priority. The increase in computing power allows us to design and create robots that are far more autonomous than previous versions. But in order to get more autonomous robots we will also require better software, and better sensors.
And we also need better sensor fusion.

Question: What do you mean by better fusion?

We will need to fuse all of this data better. Think of all the information processing that takes place when you drive your car – all of the variables that must be accounted for. Your brain automatically filters out most of the extraneous noise. The main push for autonomous robots is the continuous assimilation of all of this disparate data, for interpreting the subtleties of a real-world environment.

Question: Are most of your robots teleoperated?

Yes, that has been the case since we began this work in 1983. Most of our robots are teleoperated, with a human being in the loop at all times. We use a variety of sensors to give the operator as much telepresence as possible.

Question: What sort of vision-based sensor systems do QinetiQ's robots possess?

QinetiQ works with sea-, land-, and air-based robots, so we employ a wide variety of vision-based sensor systems. These include special optics, filters, multiple cameras, special lighting, and even enhanced sensing systems using infrared and LIDAR.

Question: What is LIDAR?

LIDAR is like a laser-based radar. LIDAR is used for collision and obstacle avoidance, and it was used in the DARPA Grand Challenge. We are constantly upgrading the capabilities of these sensors, and we anticipate some form of LIDAR eventually being employed on many of our land-based robots.

Question: How do QinetiQ's water-based robots navigate?

Our water-based robots make extensive use of sonar and acoustic sensing. We use these techniques for recognizing obstacles and mapping the terrain. But we use optical techniques underwater as well. We are increasingly melding these different sensing modalities.

Question: Do you favor tracks, wheels, or limbs for robotic locomotion?

The optimal locomotion system is highly dependent on the mission, the size of the robot, the payload, and the terrain.
We make both tracked and wheeled ground systems, and some of our robots can be reconfigured for either tracks or wheels. Tracks spread the weight of the vehicle out, and are great for soft terrain such as mud. For hard terrain, wheels may be preferred.

Question: Are you concerned that if Moore's law ends it could cripple the nascent robotics industry?

If Moore's law slows, there are still huge advances we can extract from software. There is vast room for improvement in the software for fusing sensor data, pulling signals out of noisy backgrounds, and controlling things. So even if the rate of hardware improvement slows, we can always create better software. A similar situation exists in sensors – we can always make more sophisticated and capable sensing devices.

Question: When will we see the first fully autonomous robot?

That depends upon one's definition of "fully autonomous". To me, fully autonomous means behaving in a manner similar to a human. Although we won't see such a robot for many years, we are starting to see robots that can autonomously navigate, that can perform tasks without supervision, and that can make simple decisions. We will see more and more mission autonomy as computing power grows and as sensors improve. We will also see increasingly coordinated behavior among groups of robots.

Question: What sort of features will we see in next-generation robots?

Future robots will have fuller sensor suites, and they will be able to store more complete maps of where they have been as well as where they are going. We will be able to give robots general commands, such as "go over there and open that door" or "go to that piece of ordnance and identify it." Robots will work more seamlessly with human supervisors and will find more applications working alongside humans. They will be able to make higher-level decisions, and will be able to perform some missions autonomously.
And they will be able to coordinate their actions with other robots.

Question: How much longer can the robotics field maintain its rapid growth rates?

The robotics industry has been growing at a rapid pace for at least the past decade, and I don't see that slowing down. All this will be driven by better sensors; better energy storage systems; better software for sensing, control, and communications; and better diagnostics and prognostics. The robotics industry isn't dependent on the advance of one particular technology, and so it will keep expanding even if one technology hits a wall.

Question: What developments can we expect from the robotics industry by 2021?

By 2021, we should see seamless integration of robot teams. We will also see better integration of robots with human supervisors. These robots will assimilate vast amounts of data from their myriad sensors, and will be able to fuse that data to create a high degree of situational awareness. Robots will make many of their own decisions, will frequently share data with other robots, and will continuously monitor their own systems. By 2021, many of the most advanced 2011 robots will seem primitive by comparison.
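Wiesman's recurring theme of fusing noisy, disparate sensor data can be made concrete with a minimal sketch. The snippet below shows inverse-variance weighting, one of the simplest fusion rules (it is also the core of a Kalman filter's update step): two sensors measuring the same quantity are combined so that the less noisy one gets the larger weight. The function name and the LIDAR/sonar numbers are illustrative assumptions, not anything from QinetiQ.

```python
def fuse(est_a, var_a, est_b, var_b):
    """Fuse two noisy estimates of the same quantity by inverse-variance
    weighting: the sensor with the smaller variance gets the larger weight."""
    w = var_b / (var_a + var_b)                    # weight on estimate A
    fused = w * est_a + (1 - w) * est_b
    fused_var = (var_a * var_b) / (var_a + var_b)  # always <= min(var_a, var_b)
    return fused, fused_var

# Hypothetical example: a LIDAR range of 12.0 m (variance 1.0) fused with a
# noisier sonar range of 10.0 m (variance 4.0).
distance, variance = fuse(10.0, 4.0, 12.0, 1.0)
print(distance, variance)  # 11.6 0.8
```

Note that the fused variance is smaller than either sensor's alone – combining sensors reduces uncertainty, which is why fusion matters even when individual sensors are mediocre.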