Petapixel photography: cameras imaging one million times beyond human vision, and gigapixel television

Here is a progress report on DARPA's work to achieve all-seeing multiscale cameras at the physical limits of space and time resolution. The cameras are already achieving vision 30 to 50 times beyond the limit of human acuity, and the researchers have a plan, and a research paper, for petapixel imaging, which would be one million times beyond human vision.

The AWARE 2 cameras were retrofitted with glass microcamera optics and improved, more compact electronics in April 2013. These cameras were used to image the Duke commencement in May 2013 and other recent events. The AWARE 10, a 5-10 gigapixel camera, is in production and will be online in August 2013. Significant improvements have been made to the optics, electronics, and integration of the camera. The AWARE series of multiscale cameras shows that optics and electronic sampling pose no barrier to camera information capacity; rather, capacity is ultimately limited by photon flux and atmospheric turbulence. Communications and processing are the current limiting factors, and cost is rapidly falling. This imaging could be combined with Leap Motion sensing to achieve massive, super-detailed (micron-scale) three-dimensional motion awareness.

DARPA multigigapixel camera evolution

The AWARE 2 camera has evolved since its conception in 2011 and its first gigapixel images in September 2011. Initial composites from the micro-cameras were based on the ray-trace model, which differed from the as-built camera. This caused overlap errors, uneven illumination, unregistered overlap regions, and pointing mismatches. Autofocus, auto-exposure, and live-view mode were unavailable, causing many micro-cameras to underperform. In early images, dynamic range was forfeited until HDR tone mapping was added to the compositor.
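To make the tone-mapping step concrete, here is a minimal sketch of a global Reinhard-style operator in Python. This is the generic technique, not the actual AWARE compositor, and the test image is synthetic.

```python
# Minimal sketch of global Reinhard-style HDR tone mapping, the kind of
# step a gigapixel compositor might apply before display. Illustrative
# stand-in only, not the AWARE pipeline.
import numpy as np

def tonemap_reinhard(hdr, key=0.18, eps=1e-6):
    """Compress a linear HDR image (float, arbitrary range) to [0, 1]."""
    # Luminance via standard Rec. 709 weights.
    lum = 0.2126 * hdr[..., 0] + 0.7152 * hdr[..., 1] + 0.0722 * hdr[..., 2]
    # Scale so the log-average luminance maps to the chosen middle-grey key.
    log_avg = np.exp(np.mean(np.log(lum + eps)))
    scaled = key * lum / log_avg
    # Reinhard operator: L / (1 + L) compresses highlights smoothly.
    mapped = scaled / (1.0 + scaled)
    # Reapply the per-pixel luminance ratio to all three channels.
    ratio = (mapped / (lum + eps))[..., None]
    return np.clip(hdr * ratio, 0.0, 1.0)

# Example: a synthetic scene with roughly 10,000:1 dynamic range.
hdr = np.random.rand(480, 640, 3) * np.logspace(0, 4, 640)[None, :, None] / 1e4
ldr = tonemap_reinhard(hdr)
```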

Improved plastic optics are under development in order to keep the cost per micro-camera as low as possible.

A sample image from the glass micro-optics, shown below at 100% zoom, has edge sharpness nearly identical to that at the center.

The team is still fixing issues with edge sharpness and other image defects.

The goal of this DARPA project is to design a long-term production camera that is highly scalable from sub-gigapixel to tens-of-gigapixels. Deployment of the system is envisioned for military, commercial, and civilian applications.

Ultimately, the goal of AWARE is to demonstrate that it is possible to capture all of the information in the optical field entering a camera aperture. The monocentric multiscale approach allows detection of modes at the diffraction limit. As discussed in “Petapixel Photography,” the number of voxels resolved in the space-time-spectral data cube is ultimately limited by photon flux. We argue in “Gigapixel Television,” a paper presented at the 14th Takayanagi Kenjiro Memorial Symposium, that real-time streaming of gigapixel images is within reach and advisable.

Petapixel Photography

The monochromatic single-frame pixel count of a camera is limited by diffraction to the space-bandwidth product, roughly the aperture area divided by the square of the wavelength. We have recently shown that it is possible to approach this limit using multiscale lenses for cameras with space-bandwidth products between 1 and 100 gigapixels. When color, polarization, coherence, and time are included in the image data cube, camera information capacity may exceed 1 petapixel/second. This talk reviews progress in the construction of DARPA AWARE gigapixel cameras and describes compressive measurement strategies that may be used in combination with multiscale systems to push camera capacity to near physical limits.
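As a rough sanity check on those numbers, the space-bandwidth product can be estimated as aperture area divided by wavelength squared, ignoring order-one prefactors that depend on f-number and sampling assumptions:

```python
# Back-of-envelope diffraction-limited pixel count: space-bandwidth
# product ~ aperture area / wavelength^2. Order-one prefactors omitted;
# an order-of-magnitude sketch, not the paper's exact model.
wavelength = 550e-9            # green light, meters
for aperture_mm in (10, 25, 100):
    d = aperture_mm * 1e-3
    area = 3.14159 / 4 * d**2  # circular aperture area, m^2
    sbp = area / wavelength**2
    print(f"{aperture_mm:>4} mm aperture -> ~{sbp:.1e} resolvable pixels")

#   10 mm aperture -> ~2.6e+08 resolvable pixels (sub-gigapixel)
#   25 mm aperture -> ~1.6e+09 resolvable pixels (~1.6 gigapixels)
#  100 mm aperture -> ~2.6e+10 resolvable pixels (~26 gigapixels)
```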

Imagers have often been designed to match the limits of human acuity: a 300 microradian instantaneous field of view (ifov), 3 color channels, and 30-60 frames per second. While this represents an apparently formidable 1 gigapixel/second of image data, cameras that greatly exceed human acuity are both desirable and feasible.
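A back-of-envelope version of that gigapixel/second figure, assuming an illustrative 120 by 60 degree field of view (the field is our assumption, not the paper's):

```python
# Order-of-magnitude check on the "1 gigapixel/second" figure for
# human-acuity-matched imaging. The 120 x 60 degree field is an
# illustrative assumption.
import math

ifov = 300e-6                      # instantaneous field of view, radians
fov_h = math.radians(120)          # assumed horizontal field
fov_v = math.radians(60)           # assumed vertical field
pixels = (fov_h / ifov) * (fov_v / ifov)
rate = pixels * 3 * 30             # 3 color channels, 30 frames/second
print(f"~{pixels:.1e} pixels, ~{rate:.1e} samples/second")
# -> ~2.4e+07 pixels, ~2.2e+09 samples/second: order 1 gigapixel/second
```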

The AWARE series of multiscale cameras, constructed under the DARPA AWARE Wide Field of View Program, demonstrates that optics and electronic sampling provide no barrier to camera information capacity. Rather, capacity is ultimately limited by photon flux and atmospheric turbulence. In the near term, however, capacity is limited by communications and processing. In exploring real-time gigapixel image capture and streaming, we begin a process, common in the history of information technologies, of moving over successive generations toward fundamental limits, even as we explore and question what those limits may be.

Gigapixel television

We suggest that digitally zoomable media will emerge from the integration of broadcast television and interactive networks. We review progress in multiscale cameras, consisting of parallel arrays of microcameras behind a common spherical objective, and physical layer compressive measurement. Each of these technologies is essential to “zoomcast” media in which each viewer will be able to analyze events at the fundamental physical limits of spatial and temporal resolution.

This paper considers strategies to radically increase the information content of broadcast media.

As we enter a second century of broadcast media, our goal should be to capture and broadcast images, sound and data at the limits of physical space-time resolution rather than at the limits of human resolution.

Angular resolution of imaging systems is limited by atmospheric effects, but can still exceed human acuity by 30 to 50 times at sporting events.
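To put that multiplier in physical terms, here is what 30-50x human acuity resolves at an assumed 100 meter viewing distance (the distance is illustrative):

```python
# What "30-50x human acuity" means on the field: resolvable feature size
# at a typical stadium distance. The 100 m distance is an assumption.
human_ifov = 300e-6                 # radians, roughly 1 arcminute
for factor in (1, 30, 50):
    spot = (human_ifov / factor) * 100.0   # feature size at 100 m, meters
    print(f"{factor:>2}x acuity -> {spot * 1000:.1f} mm at 100 m")
# ->  1x acuity -> 30.0 mm at 100 m
# -> 30x acuity ->  1.0 mm at 100 m
# -> 50x acuity ->  0.6 mm at 100 m
```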

Over the past several years, our group has explored physical layer coding strategies to compress image data prior to digitization and thus reduce sensor bandwidth and power. We have been particularly successful in demonstrating real-time hyperspectral imaging using image plane modulation. Image plane modulation holds further promise for compressively coding focus and dynamic range. More recently, several groups have explored image plane modulation for video compression, which may be directly effective in reducing bandwidth and power in high pixel count cameras.
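A toy sketch of the compressive-measurement principle: code the scene with random masks before digitization, take fewer measurements than pixels, and recover a sparse signal by iterative soft thresholding (ISTA). The group's actual image-plane modulation hardware is far more sophisticated; this only illustrates why fewer samples can suffice.

```python
# Generic compressive sensing sketch: a k-sparse "scene" is coded by
# random modulation patterns, measured m < n times, and recovered by
# ISTA. Illustrates the principle only, not the group's hardware.
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 64, 8                # signal size, measurements, sparsity

x = np.zeros(n)                     # sparse scene
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)   # random modulation codes
y = A @ x                           # m < n coded measurements

# ISTA: gradient step on ||Ax - y||^2, then soft threshold for sparsity.
step = 1.0 / np.linalg.norm(A, 2) ** 2
lam = 0.01
x_hat = np.zeros(n)
for _ in range(500):
    grad = A.T @ (A @ x_hat - y)
    z = x_hat - step * grad
    x_hat = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)

print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```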

AWARE 10 avoids the optical artifacts observed in the first generation AWARE 2 design and achieves near diffraction-limited performance.

AWARE 10 also achieves substantial reductions in electronics volume per pixel. A nominal 4x reduction in volume is achieved by operating 8 sensors per microcamera control processor rather than 2 sensors per processor in AWARE 2. We anticipate that 10-100 AWARE 10 and updated AWARE 2 systems will be constructed in 2013 and 2014. These systems may be used to zoomcast gigapixel frames at up to 6 frames per minute. While this is far from the dream of video-rate or faster zoomcasting, it represents a significant first step.
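Rough arithmetic on what 6 gigapixel frames per minute implies for the communications link (bytes per pixel and compression ratio are assumptions for illustration):

```python
# Rough data-rate arithmetic for "gigapixel frames at 6 frames per
# minute." Bytes per pixel and compression ratio are illustrative
# assumptions, not AWARE specifications.
pixels = 1e9                        # one gigapixel frame
bytes_per_pixel = 3                 # 8-bit RGB, pre-compression
frames_per_sec = 6 / 60.0
raw = pixels * bytes_per_pixel * frames_per_sec   # bytes/second
compressed = raw / 20               # assume ~20:1 JPEG-class compression
print(f"raw: {raw / 1e6:.0f} MB/s, compressed: ~{compressed / 1e6:.0f} MB/s")
# -> raw: 300 MB/s, compressed: ~15 MB/s (~120 megabits/s)
# Video-rate (30 fps) zoomcasting would need ~300x more throughput, which
# is why communications and processing are the near-term limits.
```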
