Nature – A gigapixel camera catches the smallest details. In mass production, the technology could bring gigapixel still and video cameras down to about $1,000.
Instead of choosing between a wide shot and a zoomed-in detail, the new technology would let photographers capture all of those pictures at once.
David Brady, an engineer at Duke University in Durham, North Carolina, and his colleagues are developing the AWARE-2 camera with funding from the United States Defense Advanced Research Projects Agency. The camera’s earliest use will probably be in automated military surveillance systems, but its creators hope eventually to make the technology available to researchers, media companies and consumers.
AWARE-2 sidesteps the size issue by using 98 microcameras, each with a 14-megapixel sensor, grouped around a shared spherical lens. Together, they take in a field of view 120 degrees wide and 50 degrees tall. With all the packaging, data-processing electronics and cooling systems, the entire camera measures about 0.75 by 0.75 by 0.5 metres.
The current version of the camera can take images of about one gigapixel; by adding more microcameras, the researchers expect eventually to reach about 50 gigapixels. Each microcamera runs autofocus and exposure algorithms independently, so that every part of the image — near or far, bright or dark — is visible in the final result. Image-processing software stitches the 98 sub-images into a single large one at a rate of three frames per minute.
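The "about one gigapixel" figure follows from the sensor count. A back-of-envelope sketch (using only the numbers quoted above; the exact overlap between sub-images is an assumption of the design, not stated in the article):

```python
# Pixel budget for AWARE-2, from the figures quoted in the article.
MICROCAMERAS = 98
PIXELS_PER_SENSOR = 14_000_000  # each microcamera has a 14-megapixel sensor

raw_pixels = MICROCAMERAS * PIXELS_PER_SENSOR
print(f"Raw pixels across all sensors: {raw_pixels / 1e9:.2f} gigapixels")
# -> roughly 1.37 gigapixels of raw sensor data. The sub-images must
# overlap so the stitching software can align them, which is why the
# final composite comes out at "about one gigapixel" rather than 1.37.
```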
“With this design, they’re changing the game,” says Illah Nourbakhsh, a roboticist at Carnegie Mellon University in Pittsburgh, Pennsylvania.
The Duke group is now building a gigapixel camera with more sophisticated electronics, which takes ten images per second — close to video rate. It should be finished by the end of the year. The cameras can currently be made for about US$100,000 each, and large-scale manufacturing should bring costs down to about $1,000. The researchers are talking to media companies about the technology, which could for example be used to film sports: fans watching gigapixel video of a football game could follow their own interests rather than the camera operator’s.
A one-gigapixel image (top) shows minute details (bottom) of the skyline in Seattle, Washington.
The challenge, says Michael Cohen, head of the Interactive Visual Media group at Microsoft Research in Redmond, Washington, is dealing with the huge amount of data that these cameras will produce.
The gigapixel camera that takes ten frames per second will generate ten gigabytes of data every second — too much to store in conventional file formats, post on YouTube or e-mail to a friend. Not everything in these huge images is worth displaying or even recording, and researchers will have to write software to determine which data are worth storing and displaying, and create better interfaces for viewing and sharing gigapixel images. “The technology for capturing the world is outpacing our ability to deal with the data,” says Nourbakhsh.
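The scale of the storage problem is easy to check with arithmetic. This sketch assumes roughly one byte per pixel (8-bit raw data); colour or higher bit depths would only inflate the numbers further:

```python
# Why a 10 fps gigapixel camera overwhelms ordinary storage.
# Assumption: ~1 byte per pixel (8-bit raw); real colour data is larger.
GIGAPIXELS_PER_FRAME = 1
BYTES_PER_PIXEL = 1
FRAMES_PER_SECOND = 10

gb_per_second = GIGAPIXELS_PER_FRAME * BYTES_PER_PIXEL * FRAMES_PER_SECOND
tb_per_hour = gb_per_second * 3600 / 1000

print(f"{gb_per_second} GB/s, or about {tb_per_hour:.0f} TB per hour")
# At 10 GB/s, a single hour of footage fills dozens of terabytes,
# far beyond what conventional file formats or sharing sites handle.
```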
Voice Command Interfaces like Siri and Super-High-Resolution Cameras
Voice command interfaces like Siri, combined with super-high-resolution cameras, would enable the photo-analysis scene from Blade Runner, in which Deckard verbally directs a machine to zoom in and enhance a photograph.
Brian Wang is a Futurist Thought Leader and a popular science blogger with 1 million readers per month. His blog Nextbigfuture.com is ranked #1 Science News Blog. It covers many disruptive technologies and trends including Space, Robotics, Artificial Intelligence, Medicine, Anti-aging Biotechnology, and Nanotechnology.
Known for identifying cutting-edge technologies, he is currently a Co-Founder of a startup and a fundraiser for high-potential early-stage companies. He is the Head of Research for Allocations for deep technology investments and an Angel Investor at Space Angels.
A frequent speaker at corporations, he has been a TEDx speaker and a Singularity University speaker, and a guest on numerous radio shows and podcasts. He is open to public speaking and advising engagements.