Simple Existing Camera Technology and How Privacy Died a Few Years Ago

Nextbigfuture has long predicted that ultra-high-resolution cameras would become relatively common and that high-resolution images and video would erode privacy.

In 2006, Brian Wang of Nextbigfuture made two predictions related to cameras, resolution and privacy.

Gigapixel cameras becoming common was predicted for 2009-2015. Instead, gigapixel imaging has arrived through parallel cameras and robotic camera mounts.
One billion digital video cameras posting online in real time, making personal privacy history, was predicted for 2008-2012.

Robotic gigapixel camera mounts cost about $300-900 each; in 2012, one was available for $484. The mounts take a few hundred digital pictures and stitch them into a single gigapixel image.

The Nokia Lumia 1020 has a 41-megapixel camera. Many cameras and phones are in the 15 to 50 megapixel range.

The Pan-STARRS telescope has 60 CCD cameras that capture a 1.5-gigapixel image in a single shot.

In 2023, a camera on the Large Synoptic Survey Telescope, situated on a mountaintop in Chile, will begin full science operations. Weighing in at nearly 6,200 pounds, it will take the reins as the world's largest digital camera, leapfrogging the GPC2 with a whopping 3.2-billion-pixel resolution.

Online Gigapixel and Big Pixel Exploration of Ultra-High-Resolution Photos

The Big Pixel studio in China can take 195-gigapixel photos.

Big Pixel can take one picture of downtown Shanghai and let people zoom in to individual people and faces. The minimum resolution for some level of facial recognition is about 21 pixels by 21 pixels, and most faces are about 21 centimeters across. A 195-gigapixel image is roughly 440,000 by 440,000 pixels, so if each pixel covered one centimeter of scene, a 4.4-kilometer by 4.4-kilometer picture would still support basic facial recognition.
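That arithmetic is easy to check. Below is a minimal Python sketch of the calculation, assuming the 21 x 21 pixel recognition threshold and 21 cm face size quoted above (which works out to one centimeter of scene per pixel).

```python
import math

# Figures from the text above.
GIGAPIXELS = 195        # Big Pixel's 195-gigapixel images
FACE_MIN_PX = 21        # ~21 x 21 pixels needed for basic facial recognition
FACE_SIZE_CM = 21       # a face is roughly 21 cm across

# A 195-gigapixel image is roughly 440,000 x 440,000 pixels.
side_pixels = math.sqrt(GIGAPIXELS * 1e9)
print(f"image side: {side_pixels:,.0f} pixels")          # ~441,588

# If each pixel covers 1 cm of scene (so a face spans the needed 21 pixels),
# the image covers a square this many kilometers on a side:
cm_per_pixel = FACE_SIZE_CM / FACE_MIN_PX                # = 1.0 cm/pixel
side_km = side_pixels * cm_per_pixel / 100 / 1000        # cm -> m -> km
print(f"coverage: {side_km:.1f} km x {side_km:.1f} km")  # ~4.4 km x 4.4 km
```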

Digital Eye in the Sky

Digital eyes in the sky were already used by US forces in Iraq and Afghanistan to find IED bombers. They have also been used in a US city to find murderers. A high-resolution camera is placed in a Cessna or a long-duration drone, and the city is filmed so that one pixel covers roughly one person. A pixel, or a person, can be highlighted and tracked. All events in the field of view are recorded, TiVo-style. When an event of interest happens, the recording is rewound and any physical interaction with that point can be traced. You are a physical object, and if you can be seen from the sky, your every movement can be tracked.
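The rewind-and-trace idea can be sketched in a few lines. The snippet below is illustrative only: it assumes recorded frames have already been reduced to per-frame detections, and the function name and data layout are hypothetical, not any vendor's actual system.

```python
# Illustrative sketch of TiVo-style back-tracking over wide-area footage.
# Assumes detections were already extracted per frame as {frame: [(track_id, x, y), ...]}.
# trace_backward and the data layout are hypothetical, for illustration only.

def trace_backward(frames, event_frame, event_xy, radius=2.0):
    """Rewind from an event and collect every track that passed near the event point."""
    nearby = set()
    for f in range(event_frame, -1, -1):              # scan backward: newest to oldest
        for track_id, x, y in frames.get(f, []):
            dx, dy = x - event_xy[0], y - event_xy[1]
            if dx * dx + dy * dy <= radius * radius:  # within a few pixels of the point
                nearby.add(track_id)
    return nearby

# Toy example: track 7 passed through the event point before the event.
frames = {0: [(7, 10.0, 10.0)], 1: [(7, 10.5, 10.2)], 2: [(9, 50.0, 50.0)]}
print(trace_backward(frames, event_frame=2, event_xy=(10.4, 10.1)))  # {7}
```

In a real wide-area motion imagery system, the hard parts are detection, track association, and storage at scale; the rewind step itself is just a scan backward through indexed frames.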

Persistent Surveillance Systems (PSS) flies a small Cessna aircraft at 10,000 feet. The surveillance planes are loaded with specialized 192-megapixel cameras that can watch 25 square miles of territory, providing something no ordinary helicopter or police plane could: a TiVo-style time machine that watches and records the movements of every person and vehicle below.

Each 50-gigapixel camera could cover roughly 110 kilometers by 110 kilometers. About 1,000 such high-resolution camera arrays could cover the entire USA or China, and drones and camera arrays would make this relatively cheap.
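A quick sanity check on those numbers, assuming about half a meter of ground per pixel (roughly one pixel per person; this resolution figure is my assumption, not stated above):

```python
import math

PIXELS = 50e9          # one 50-gigapixel camera
M_PER_PIXEL = 0.5      # assumed ~0.5 m of ground per pixel (one pixel per person)
USA_AREA_KM2 = 9.8e6   # rough land area of the USA

side_km = math.sqrt(PIXELS) * M_PER_PIXEL / 1000
print(f"one camera covers ~{side_km:.0f} km x {side_km:.0f} km")  # ~112 km per side

cameras = USA_AREA_KM2 / side_km ** 2
print(f"cameras for full US coverage: ~{cameras:.0f}")            # ~780, order of 1,000
```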

Parallel Cameras

New parallel camera technology lets you use many cameras to take one high-resolution picture or video.

In 2012, the DARPA-Duke University AWARE-2 camera used 98 microcameras, each with a 14-megapixel sensor, grouped around a shared spherical lens. Together, they take in a field of view 120 degrees wide and 50 degrees tall.
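Those 98 sensors add up to roughly 1.4 gigapixels of raw data per frame; the usable stitched panorama is somewhat smaller, since neighboring microcameras overlap. A quick check:

```python
# Aggregate sensor resolution of AWARE-2, from the figures above.
microcameras = 98
mp_per_sensor = 14
raw_gigapixels = microcameras * mp_per_sensor / 1000
print(f"raw sensor data per frame: ~{raw_gigapixels:.2f} gigapixels")  # ~1.37 GP
```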

A 2018 paper by the Duke-DARPA researchers indicates that the volume of gigapixel parallel camera arrays was reduced by over 100 times in five years, and they expect another 100-fold reduction over the next five years.

In considering moving to larger or smaller sensors, one must analyze which metrics may be improved. The mechanical overhead and electronic interfaces suggest that very small sensors will have a higher cost per pixel than 4K sensors, but it is far from clear that 100 MP sensors have a lower cost per pixel or better noise performance than 4K. As with optics, there is some optimal array size at which the system cost per pixel is minimized.

It is important to note that the actual sensor contributes relatively little to the cost or volume of current digital camera systems. For the AWARE cameras the sensor cost was less than 2% of the overall system cost, and the size, weight, and power of image processing, communications, and storage systems were vastly larger than that of the sensor itself. These subsystems are naturally parallelizable.

Aqueti, Inc. developed a software platform to allow video operation of AWARE cameras. AWARE cameras used field-programmable gate arrays (FPGAs) to collect data from the microcameras. The FPGAs required water cooling to process images at 6 frames per second with 3 W of capture power per sensor. Data compression and storage were implemented in a remote computer cluster, requiring nearly 1 Gb/sensor/second of transmission bandwidth between the camera head and the server. Real-time stitching and interactive video from this system used a CPU and network-attached storage array requiring more than 30 W per sensor.

More recently, Aqueti has extended this software platform in the "Mantis" series of discrete lens array cameras. Mantis cameras use NVIDIA Tegra TX1 "system on module" microcamera controllers. Each Tegra supports two 4K sensors on 10 W of power, and the system runs at 30 fps, with image processing and compression implemented in the camera head at 5 W per sensor. The Mantis cameras produce 100 MP images coded in H.265 format at 10-25 MB/s of bandwidth to a remote render machine. While Mantis does not require the camera-head water cooling used in AWARE, the Mantis head dissipates 100 W.
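The generational improvement is easy to quantify from the figures above. A rough sketch, treating the AWARE 30 W stitching/storage figure as per-sensor processing power, as the text reports it:

```python
# Per-sensor power per delivered frame rate, AWARE vs. Mantis, from the figures above.
aware_power_w = 3 + 30   # capture power plus stitching/storage power per sensor (W)
aware_fps = 6
mantis_power_w = 5       # capture and compression per sensor, in the camera head (W)
mantis_fps = 30

aware_w_per_fps = aware_power_w / aware_fps       # 5.50 W per sensor per (frame/s)
mantis_w_per_fps = mantis_power_w / mantis_fps    # ~0.17 W per sensor per (frame/s)
print(f"AWARE:  {aware_w_per_fps:.2f} W/sensor/fps")
print(f"Mantis: {mantis_w_per_fps:.2f} W/sensor/fps")
print(f"improvement: ~{aware_w_per_fps / mantis_w_per_fps:.0f}x")  # ~33x
```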

While the overall image processing and compression volume is decreased by over 100× relative to AWARE, the electronic system remains larger and more expensive than the optics.

The Aqueti Mantis 70 camera is an array of 18 narrow-field microcameras, each with a 25 mm focal length lens and a 1.6 μm pixel pitch. Each uses a Sony IMX274 color CMOS sensor. Sensor readout, ISP, and data compression are implemented using an array of NVIDIA Tegra TX1 modules, with two sensors per TX1. Custom software streams sensor data to a render machine, which produces real-time interactive video with less than 100 ms of latency. The sensors are arrayed to cover a 73° horizontal field of view and a 21° vertical field of view. The instantaneous field of view is 65 μrad, and the fully stitched image has a native resolution of 107 MP. The camera operates at 30 frames per second.
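The stated resolution is consistent with the optics. A quick small-angle check in Python gives about 110 MP, slightly above the 107 MP native figure, with stitching overlap and rounding plausibly accounting for the difference:

```python
import math

IFOV_RAD = 65e-6                 # instantaneous field of view per pixel, 65 microradians
H_FOV_DEG, V_FOV_DEG = 73, 21    # stitched field of view

h_pixels = math.radians(H_FOV_DEG) / IFOV_RAD   # ~19,600 pixels across
v_pixels = math.radians(V_FOV_DEG) / IFOV_RAD   # ~5,640 pixels tall
print(f"{h_pixels:,.0f} x {v_pixels:,.0f} ~= {h_pixels * v_pixels / 1e6:.0f} MP")  # ~110 MP
```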

Mantis Camera Arrays

Mantis cameras offer numerous advantages over even the most advanced HD or 4K cameras. They can radically scale up resolution without sacrificing frames per second.

Mantis parallel supercameras offer a distinct advantage in security and surveillance, with capabilities that meet complex needs for clients ranging from concert and sports venues to airports and cities. Mantis imaging systems eliminate the limitations of traditional video setups that rely on multiple mechanical pan-tilt-zoom cameras. Those conventional cameras record only the targeted areas, at the expense of missing crucial details and moments in the larger field of view. With a Mantis camera, you can record everything within the field of view. All of this media can be explored live and after it has been stored. Weeks or months later, you can zoom in and review all the data.