Speeding swarms of sensor robots

One of two Slocum gliders owned and operated by the USC Center for Integrated Networked Aquatic PlatformS (CINAPS). Image: Smith et al.

A new algorithm ensures that robotic environmental sensors will be able to focus on areas of interest without giving other areas short shrift.

At the Institute of Electrical and Electronics Engineers’ International Conference on Robotics and Automation in May, MIT researchers will present a new algorithm enabling sensor-laden robots to focus on the parts of their environments that change most frequently, without losing track of the regions that change more slowly. At the same conference, they’ll present a second paper describing a test run of the algorithm on underwater sensors that researchers at the University of Southern California (USC) are using to study algae blooms.

The work of Daniela Rus, a professor of computer science and electrical engineering, and postdocs Mac Schwager and Stephen Smith (now an assistant professor at the University of Waterloo in Ontario), the algorithm is designed for robots that will be monitoring an environment for long periods of time, tracing the same routes over and over. It assumes that the data of interest — temperature, the concentration of chemicals, the presence of organisms — fluctuate at different rates in different parts of the environment. In ocean regions with strong currents, for instance, chemical concentrations might change more rapidly than they do in more sheltered areas.

In its current version, the algorithm assumes that researchers already have a mathematical model of the rates at which conditions change in different parts of the environment. The algorithm simply determines how the robots should adjust their velocities as they trace their routes. For instance, given particular rates of change along a route, would it make more sense to make one pass in an hour, slowing down considerably in areas of frequent change, or to make four or five passes, collecting less detailed data but taking more regular samples?
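A minimal toy simulation makes that tradeoff concrete. It is not the researchers' algorithm, and every name and number in it (peak_backlog, sense_rate, the cell rates) is our own invention: the route is a loop of cells, each accumulating un-measured change at its own rate, and the robot works off the backlog in whichever cell it occupies at a fixed sensing rate. Slowing down over the fast-changing cells keeps the worst backlog bounded; a one-speed-fits-all pass does not, even though both take the same time per lap.

```python
import numpy as np

def peak_backlog(rates, speeds, total_time, sense_rate=25.0, dt=0.005):
    """Simulate one sensor robot circling a loop of discrete cells.

    Each cell accumulates un-measured 'change' at its own rate; while
    the robot occupies a cell, it works that cell's backlog off at
    sense_rate. Returns the largest backlog seen anywhere en route.
    """
    n = len(rates)
    backlog = np.zeros(n)
    pos, peak, t = 0.0, 0.0, 0.0
    while t < total_time:
        backlog += rates * dt                   # change accrues everywhere
        cell = int(pos) % n
        backlog[cell] = max(0.0, backlog[cell] - sense_rate * dt)
        peak = max(peak, backlog.max())
        pos += speeds[cell] * dt                # advance at the chosen speed
        t += dt
    return peak

# Toy route: two fast-changing cells (rate 5) amid quiet ones (rate 1).
rates   = np.array([1, 1, 5, 5, 1, 1, 1, 1], dtype=float)
uniform = np.full(8, 1.0)                 # constant speed everywhere
adapted = np.where(rates > 1, 0.4, 2.0)   # linger where change is fast

# Both profiles take 8 time units per lap, but only the adapted one
# keeps the worst backlog bounded over many laps.
print("uniform:", peak_backlog(rates, uniform, 400.0))
print("adapted:", peak_backlog(rates, adapted, 400.0))
```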

“From a practical point of view, it seems like an easy problem,” says Calin Belta, an assistant professor of mechanical engineering, systems engineering and bioinformatics at Boston University, who was not involved in the research. But it turns out to be a monstrously complex calculation. “It’s very hard to come up with a mathematical proof that you can really optimize the acquired knowledge,” he adds.

The MIT researchers draw an analogy with dust accumulating on a floor — dust that’s cleared whenever a sensor passes nearby. Because environmental change occurs at different rates in different areas, the dust piles up unevenly. The researchers were able to show that, with their algorithm, the height of the piles of dust would never exceed some limit: Only so much change could occur in any area before the sensor would measure it.
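A back-of-the-envelope condition captures the flavor of that guarantee (the symbols here are our own, not the paper's). If a stretch of the route collects dust at rate $p$, the sensor clears dust at rate $c$ while overhead, and each lap of period $T$ keeps the sensor there for time $\tau$, then the pile shrinks over every lap whenever

$$c\,\tau \;\ge\; p\,T,$$

and its height then stays below roughly $p\,(T - \tau)$, the most that can settle between visits.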

At the moment, the algorithm depends either on a prior estimate of the rates of change across an environment or on researchers’ prioritization of regions. But in principle, a robotic sensor should be able to deduce rates of change from its own measurements, and the MIT researchers are currently working to modify the algorithm so that it can revise its own computations in light of new evidence. “That’s going to be a hard problem as well,” Belta says. “But they have the right background, and they’re strong, so I think they might be able to do it.”
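To illustrate what such self-revision might look like, here is one simple sketch, entirely our own rather than the researchers' scheme: each time the robot revisits a spot, the change it measures divided by the time since its last visit is a fresh sample of that spot's rate, which can be folded into a running estimate by exponential smoothing.

```python
def update_rate_estimate(old_rate, change_seen, time_since_visit, alpha=0.3):
    """Blend a fresh rate sample (measured change / elapsed time)
    into a running per-cell estimate via exponential smoothing.
    alpha controls how quickly old beliefs are discarded."""
    sample = change_seen / time_since_visit
    return (1 - alpha) * old_rate + alpha * sample

# One cell, revisited every 8 time units; its true rate is about 5.
rate = 1.0                                    # deliberately poor initial guess
for change, gap in [(42.0, 8.0), (38.0, 8.0), (45.0, 8.0)]:
    rate = update_rate_estimate(rate, change, gap)
print(round(rate, 2))                         # 3.8, climbing toward 5
```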

The researchers also envision that the algorithm could prove useful for fleets of robots performing tasks other than environmental monitoring, such as tending produce, or — in a more literal application of the vacuuming-dust metaphor — cleaning up environmental hazards, such as oil leaking from underwater wells.
