Stanford researchers developing 3-D camera with 12,616 lenses


Stanford electronics researchers, led by electrical engineering Professor Abbas El Gamal, are developing a camera that makes a 2-D photo with an electronic “depth map” containing the distance from the camera to every object in the picture, a kind of super 3-D.

They built it around their “multi-aperture image sensor.” They’ve shrunk the pixels on the sensor to 0.7 microns, several times smaller than pixels in standard digital cameras. They’ve grouped the pixels in arrays of 256 pixels each, and they’re preparing to place a tiny lens atop each array.

Current cameras rely on an expensive main lens; the new system could de-emphasize the lens and instead use on-chip processing of the extra information to improve image quality. The result would be better 3-D, better high-resolution and cheaper cameras, and possibly a better way to provide robots with 3-D vision and 3-D awareness.

If their prototype 3-megapixel chip had all its micro lenses in place, they would add up to 12,616 “cameras.”
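As a quick sanity check, the figures in the paragraphs above fit together: 12,616 arrays of 256 pixels each comes to roughly a 3.2-megapixel chip, and at 0.7 microns per pixel each micro lens would cover a patch only about 11 microns across. A short sketch of the arithmetic (the square 16 x 16 layout of each array is an assumption; the article only gives the 256-pixel count):

```python
# Back-of-the-envelope check of the sensor geometry described above.
# Assumption (not stated in the article): each 256-pixel array is a square 16 x 16 block.

PIXEL_PITCH_UM = 0.7        # pixel size from the article, in microns
PIXELS_PER_ARRAY = 256      # pixels grouped under each micro lens
NUM_ARRAYS = 12_616         # "cameras" quoted for the 3-megapixel prototype

total_pixels = NUM_ARRAYS * PIXELS_PER_ARRAY
print(f"total pixels: {total_pixels:,} (~{total_pixels / 1e6:.2f} megapixels)")

# If the arrays are 16 x 16, each micro lens covers a square this wide:
array_side_pixels = int(PIXELS_PER_ARRAY ** 0.5)
array_side_um = array_side_pixels * PIXEL_PITCH_UM
print(f"each micro-lens array spans {array_side_pixels} pixels = {array_side_um:.1f} microns per side")
```

Running this prints about 3.23 megapixels and an 11.2-micron array width, consistent with the prototype described above.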

Point such a camera at someone’s face, and it would, in addition to taking a photo, precisely record the distances to the subject’s eyes, nose, ears, chin, etc. One obvious potential use of the technology: facial recognition for security purposes.

But there are a number of other possibilities for a depth-information camera: biological imaging, 3-D printing, creation of 3-D objects or people to inhabit virtual worlds, or 3-D modeling of buildings.

The technology is expected to produce a photo in which almost everything, near or far, is in focus. But it would be possible to selectively defocus parts of the photo after the fact, using editing software on a computer.
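The article doesn’t say how that after-the-fact defocusing would work, but with a per-pixel depth map the basic idea is straightforward: blur each pixel in proportion to how far its depth lies from the chosen focal plane. A minimal sketch in Python (NumPy/SciPy), assuming an all-in-focus image and a depth map in the same units as the chosen focus distance:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def selective_defocus(image, depth, focus_depth, max_sigma=6.0, levels=4):
    """Re-blur an all-in-focus photo using its depth map (illustrative sketch).

    image       : (H, W, 3) float array, the all-in-focus photo
    depth       : (H, W) float array, distance to each pixel
    focus_depth : distance that should stay sharp (same units as depth)
    """
    # Normalised "how far out of focus" value in [0, 1] per pixel.
    blur_amount = np.abs(depth - focus_depth)
    blur_amount = blur_amount / (blur_amount.max() + 1e-9)

    # Precompute a small stack of increasingly blurred copies of the image.
    sigmas = np.linspace(0.0, max_sigma, levels)
    stack = [image if s == 0 else gaussian_filter(image, sigma=(s, s, 0)) for s in sigmas]

    # Pick the nearest blur level for each pixel and assemble the output.
    level = np.clip(np.round(blur_amount * (levels - 1)).astype(int), 0, levels - 1)
    out = np.empty_like(image)
    for i, blurred in enumerate(stack):
        mask = level == i
        out[mask] = blurred[mask]
    return out
```

A real editing tool would blend smoothly between blur levels and handle occlusion boundaries, but the sketch shows how the depth map drives the effect.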

Knowing the exact distance to an object might give robots better spatial vision than humans and allow them to perform delicate tasks now beyond their abilities.

Other researchers are pursuing similar depth-map goals with different approaches. Some use intelligent software to inspect ordinary 2-D photos for the edges, shadows or focus differences from which the distances of objects might be inferred. Others have tried cameras with multiple lenses, or prisms mounted in front of a single camera lens. One approach employs lasers; another attempts to stitch together photos taken from different angles, while yet another involves video shot from a moving camera.

But El Gamal, Fife and Wong believe their multi-aperture sensor has some key advantages. It’s small and doesn’t require lasers, bulky camera gear, multiple photos or complex calibration. And it has excellent color quality. Each of the 256 pixels in a specific array detects the same color.
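The article doesn’t detail how depth is recovered from the sensor, but the overlapping views from neighboring micro-lens arrays behave much like a stereo pair: a scene point appears shifted (a “disparity”) between two sub-images, and the shift shrinks with distance. A rough illustration of that principle using simple block matching between two already-aligned sub-images; the focal length, baseline and search parameters are hypothetical, not the Stanford design’s:

```python
import numpy as np

def disparity_to_depth(left, right, focal_px, baseline_m, patch=5, max_disp=16):
    """Estimate a coarse depth map from two horizontally shifted views.

    left, right : (H, W) grayscale float arrays from adjacent sub-apertures
    focal_px    : micro-lens focal length expressed in pixels (assumed known)
    baseline_m  : spacing between the two apertures, in meters (assumed known)
    Returns an (H, W) depth map in meters (0 at borders that are not evaluated).
    """
    H, W = left.shape
    half = patch // 2
    depth = np.zeros((H, W), dtype=np.float64)

    for y in range(half, H - half):
        for x in range(half + max_disp, W - half):
            ref = left[y - half:y + half + 1, x - half:x + half + 1]
            # Find the best-matching patch in the other view (sum of absolute differences).
            costs = [np.abs(ref - right[y - half:y + half + 1,
                                        x - d - half:x - d + half + 1]).sum()
                     for d in range(1, max_disp + 1)]
            d = 1 + int(np.argmin(costs))
            # Classic pinhole-stereo relation: depth = focal length * baseline / disparity.
            depth[y, x] = focal_px * baseline_m / d
    return depth
```

In the multi-aperture sensor many overlapping views are available rather than just two, which tightens the depth estimate and provides the redundancy discussed below.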

The technology also may aid the quest for the huge photos possible with a gigapixel camera—that’s 140 times as many pixels as today’s typical 7-megapixel cameras. The first benefit of the Stanford technology is straightforward: Smaller pixels mean more pixels can be crowded onto the chip.

The second benefit involves chip architecture. With a billion pixels on one chip, some of them are sure to go bad, leaving dead spots, El Gamal said. But the overlapping views provided by the multi-aperture sensor provide backups when pixels fail.
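The article doesn’t say how that backup works in practice, but the idea is easy to illustrate: once several sub-aperture views have been registered onto a common grid, a sample lost to a dead pixel in one view can be filled in from valid samples of the same scene point in other views. A minimal sketch, assuming the registration has already been done and a per-view validity mask is available:

```python
import numpy as np

def fill_dead_pixels(views, valid):
    """Combine registered sub-aperture images while ignoring dead pixels.

    views : (N, H, W) float array - N sub-images already registered to a common grid
    valid : (N, H, W) boolean array - False where a sensor pixel is dead
    Returns an (H, W) image where each output pixel is the average of all valid
    samples that observed it, and NaN where no view had a valid sample.
    """
    views = np.where(valid, views, 0.0)
    counts = valid.sum(axis=0)
    with np.errstate(invalid="ignore", divide="ignore"):
        out = views.sum(axis=0) / counts
    return np.where(counts > 0, out, np.nan)
```

The registration step is the hard part, since each aperture sees the scene from a slightly different position; the sketch only shows the averaging once that alignment is done.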

The finished product may cost less than existing digital cameras, the researchers say, because the quality of a camera’s main lens will no longer be of paramount importance. “We believe that you can reduce the complexity of the main lens by shifting the complexity to the semiconductor,” Fife said.

FURTHER READING
I had previously discussed laser-based 3-D LIDAR freeze-frame technology for more autonomous robots

I had discussed other gigapixel camera systems

Stitching for terapixel images

An update on lens arrays for gigapixel and super-resolution aerial photographs; besides the bug’s-eye lens, there is a mohawk version for longer and thinner shot coverage.