A new imaging system could use opaque walls, doors or floors as ‘mirrors’ to gather information about scenes outside its line of sight

MIT Media Lab researchers caused a stir by releasing a slow-motion video of a burst of light traveling the length of a plastic bottle. But the experimental setup that enabled that video was designed for a much different application: a camera that can see around corners.

The researchers describe using their system to produce recognizable 3-D images of a wooden figurine and of foam cutouts outside their camera’s line of sight. The research could ultimately lead to imaging systems that allow emergency responders to evaluate dangerous environments or vehicle navigation systems that can negotiate blind turns, among other applications.

The principle behind the system is essentially that of the periscope. But instead of using angled mirrors to redirect light, the system uses ordinary walls, doors or floors — surfaces that aren’t generally thought of as reflective.

The system exploits a device called a femtosecond laser, which emits bursts of light so short that their duration is measured in quadrillionths of a second. To peer into a room that’s outside its line of sight, the system might fire femtosecond bursts of laser light at the wall opposite the doorway. The light would reflect off the wall and into the room, then bounce around and re-emerge, ultimately striking a detector that can take measurements every few picoseconds, or trillionths of a second. Because the light bursts are so short, the system can gauge how far they’ve traveled by measuring the time it takes them to reach the detector.
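
To make the scale concrete: light covers only about 0.3 millimetres per picosecond, which is why picosecond timing translates into millimetre-level distance estimates. The sketch below is a minimal illustration of that conversion; the numbers are made up for the example and are not taken from the paper.

```python
# Illustrative only: convert a detector arrival time into the total distance
# the pulse travelled (laser -> wall -> hidden scene -> wall -> detector).
SPEED_OF_LIGHT_M_PER_PS = 2.998e-4   # light travels roughly 0.3 mm per picosecond

def path_length_m(arrival_time_ps):
    return arrival_time_ps * SPEED_OF_LIGHT_M_PER_PS

print(path_length_m(4000))   # a 4,000 ps (4 ns) round trip is about 1.2 m of travel
```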

Experimental set-up. Source: Nature Communications – Recovering three-dimensional shape around a corner using ultrafast time-of-flight imaging

The system performs this procedure several times, bouncing light off several different spots on the wall, so that it enters the room at several different angles. The detector, too, measures the returning light at different angles. By comparing the times at which returning light strikes different parts of the detector, the system can piece together a picture of the room’s geometry.
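
One simple way to picture the reconstruction is as backprojection onto a voxel grid: each measured path length constrains the hidden point to an ellipsoid whose foci are the illuminated wall spot and the observed wall spot, and voxels accumulate votes where many ellipsoids agree. The sketch below illustrates that idea only; the geometry, grid and tolerance are invented, and this is not the authors' exact algorithm.

```python
# Rough sketch of ellipsoidal backprojection onto a voxel grid; spot positions,
# grid and tolerance are invented for illustration only.
import numpy as np

def backproject(measurements, grid, tol=0.005):
    """measurements: (laser_spot, detector_spot, total_path_length_m) tuples.
    grid: (N, 3) array of candidate voxel centres in metres.
    Returns a per-voxel score; peaks mark likely hidden surface points."""
    score = np.zeros(len(grid))
    for laser_spot, det_spot, path_len in measurements:
        d = (np.linalg.norm(grid - laser_spot, axis=1) +
             np.linalg.norm(grid - det_spot, axis=1))
        # Vote for voxels whose wall-to-voxel-to-wall distance matches the
        # measured path length to within the timing resolution (~millimetres).
        score += np.exp(-((d - path_len) / tol) ** 2)
    return score

# Toy usage: a single hidden point seen via three wall-spot pairs.
hidden = np.array([0.2, 0.3, 0.4])
spots = [(np.array([0.00, 0.00, 0.0]), np.array([0.10, 0.00, 0.0])),
         (np.array([0.00, 0.10, 0.0]), np.array([0.10, 0.10, 0.0])),
         (np.array([0.05, 0.05, 0.0]), np.array([0.15, 0.05, 0.0]))]
meas = [(l, s, np.linalg.norm(hidden - l) + np.linalg.norm(hidden - s))
        for l, s in spots]
xs = np.linspace(0.0, 0.5, 11)
grid = np.array([[x, y, z] for x in xs for y in xs for z in xs])
print(grid[np.argmax(backproject(meas, grid))])   # recovers roughly [0.2 0.3 0.4]
```

With only one or two wall spots the ellipsoids intersect along curves and the answer is ambiguous; adding more spots, as the system does, collapses the ambiguity toward isolated surface points.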

The data collected by the ultrafast sensor were processed by algorithms developed by Ramesh Raskar, an associate professor at the MIT Media Lab, and Andreas Velten, a postdoc at the Media Lab, in collaboration with Otkrist Gupta, a graduate student in Raskar’s group; Thomas Willwacher, a mathematics postdoc at Harvard University; and Ashok Veeraraghavan, an assistant professor of electrical and computer engineering at Rice University. The 3-D images produced by the algorithms were blurry but easily recognizable.

Raskar envisions that a future version of the system could be used by emergency responders — firefighters looking for people in burning buildings or police determining whether rooms are safe to enter — or by vehicle navigation systems, which could bounce light off the ground to look around blind corners. It could also be used with endoscopic medical devices, to produce images of previously obscured regions of the human body.

The math required to knit multiple femtosecond-laser measurements into visual images is complicated, but Andrew Fitzgibbon, a principal researcher at Microsoft Research who specializes in computer vision, says it does build on research in related fields. “There are areas of computer graphics which have used that sort of math,” Fitzgibbon says. “In computer graphics, you’re making a picture. Applying that math to acquiring a picture is a great idea.” Raskar adds that his team’s image-reconstruction algorithm uses a technique called filtered backprojection, which is the basis of CAT scans.
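
For reference, the sketch below shows filtered backprojection in its classic two-dimensional CT form: each projection is filtered in the frequency domain and then smeared back across the image from its acquisition angle. The phantom and ramp filter here are a textbook toy, not the team's implementation.

```python
# Classic 2-D filtered backprojection, the CAT-scan algorithm the article
# refers to; the phantom and ramp filter are a textbook toy example.
import numpy as np
from scipy.ndimage import rotate

def radon(image, angles_deg):
    """Forward projection: sum the rotated image along one axis per angle."""
    return np.stack([rotate(image, a, reshape=False).sum(axis=0)
                     for a in angles_deg])

def filtered_backprojection(sinogram, angles_deg):
    n = sinogram.shape[1]
    ramp = np.abs(np.fft.fftfreq(n))                       # ramp filter
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))
    recon = np.zeros((n, n))
    for proj, a in zip(filtered, angles_deg):
        # Smear the filtered projection across the image, then rotate it back.
        recon += rotate(np.tile(proj, (n, 1)), -a, reshape=False)
    return recon * np.pi / (2 * len(angles_deg))            # approximate scaling

angles = np.linspace(0.0, 180.0, 60, endpoint=False)
phantom = np.zeros((128, 128))
phantom[40:80, 50:90] = 1.0                                 # toy rectangular object
recon = filtered_backprojection(radon(phantom, angles), angles)
```

Raskar's remark suggests the hidden-scene reconstruction reuses this filter-and-backproject idea, with the straight CT rays replaced by the wall-bounce geometry sketched earlier.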

Indeed, Fitzgibbon says, the real innovation behind the project was the audacity to try it. “Coming at it from both ends, from the raw scientific question — because, you know, it is kind of a scientific question: ‘Could we see around a corner?’ — to the extreme engineering of it — ‘Can we time these pulses to femtoseconds?’ — that combination, I think, is rare.”

In its work so far, Raskar says, his group has discovered that the problem of peering around a corner has a great deal in common with that of using multiple antennas to determine the direction of incoming radio signals. Going forward, Raskar hopes to use that insight to improve the quality of the images the system produces and to enable it to handle visual scenes with a lot more clutter.
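
That analogy corresponds to classic direction-of-arrival estimation: with several antennas, the relative delays at which a wavefront reaches each element reveal where it came from, much as relative photon arrival times reveal where hidden surfaces sit. The sketch below shows the simplest delay-matching version of the idea; the array geometry and candidate grid are invented for illustration and are not from the paper.

```python
# Toy direction-of-arrival estimate for a linear antenna array; geometry and
# numbers are invented to illustrate the analogy only.
import numpy as np

C = 3.0e8                              # speed of light, m/s
positions = np.arange(8) * 0.1         # eight antennas spaced 0.1 m apart

def estimate_direction(measured_delays_s, candidate_angles_deg):
    """Return the candidate angle whose predicted inter-antenna delays
    (plane-wave model) best match the measured ones."""
    best_angle, best_err = None, np.inf
    for angle in candidate_angles_deg:
        predicted = positions * np.sin(np.radians(angle)) / C
        err = np.sum((measured_delays_s - predicted) ** 2)
        if err < best_err:
            best_angle, best_err = angle, err
    return best_angle

true_angle = 25.0                      # degrees from broadside
delays = positions * np.sin(np.radians(true_angle)) / C
print(estimate_direction(delays, np.linspace(-90, 90, 181)))   # prints 25.0
```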

From the abstract of the Nature Communications paper:

The recovery of objects obscured by scattering is an important goal in imaging and has been approached by exploiting, for example, coherence properties, ballistic photons or penetrating wavelengths. Common methods use scattered light transmitted through an occluding material, although these fail if the occluder is opaque. Light is scattered not only by transmission through objects, but also by multiple reflection from diffuse surfaces in a scene. This reflected light contains information about the scene that becomes mixed by the diffuse reflections before reaching the image sensor. This mixing is difficult to decode using traditional cameras. Here we report the combination of a time-of-flight technique and computational reconstruction algorithms to untangle image information mixed by diffuse reflection. We demonstrate a three-dimensional range camera able to look around a corner using diffusely reflected light that achieves sub-millimetre depth precision and centimetre lateral precision over 40 cm × 40 cm × 40 cm of hidden space.

The paper includes 17 pages of supplemental information.
