The technology works by sending out a low-power infrared laser beam, which sweeps over an object or scene. Some light gets reflected back, though most is scattered in other directions. A detector measures how long it takes a single particle of light, a photon, to return to the camera, and from that round-trip time calculates the distance from the system to the object. The technique can resolve millimeter-size bumps and changes in depth from hundreds of meters away.
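The time-of-flight principle described above reduces to a one-line calculation: the photon travels to the target and back, so the distance is half the round-trip time multiplied by the speed of light. A minimal sketch (illustrative only, not the researchers' actual code):

```python
# Time-of-flight ranging: distance from a photon's round-trip travel time.
C = 299_792_458.0  # speed of light in vacuum, m/s

def distance_from_return_time(round_trip_s: float) -> float:
    """Distance to the target, in meters, from the photon's round trip."""
    # The photon covers the range twice (out and back), hence the /2.
    return C * round_trip_s / 2.0

# A photon returning after ~6.67 microseconds implies a target roughly 1 km away.
print(distance_from_return_time(6.671e-6))
```

Note the timing precision this implies: resolving millimeter depth differences at these ranges means distinguishing round-trip times that differ by only a few picoseconds, which is why a fast single-photon detector matters.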
The new camera takes advantage of superconducting nanowires, materials with almost no electrical resistance that have to be cooled to extremely low temperatures. These superconductors are very sensitive and can tell when just a single photon has hit them.
Although other approaches can achieve exceptional depth resolution, the new system's ability to image objects that do not easily reflect laser pulses, such as items of clothing, makes it useful in a wider variety of field situations.
The pictures on the right were taken from 910 meters away.
The technology is similar to LIDAR, a remote sensing technique that also uses laser light to measure the distance to different objects. By using infrared light, Buller’s camera is able to detect a wide variety of different items that don’t reflect laser beams well, like clothing. And the long-wavelength infrared light is safer than other lasers because it won’t harm people’s eyes when it scans them.
So what could this new camera be used for? Are Google’s Streetview cars now going to make drive-by 3-D models of every city? Will TSA agents figure out some new way to watch you at the airport?
Buller said the technology could have a lot of different scientific applications. The system could be placed on airplanes and used to scan the vegetation in a forest, helping to determine the size and health of the plants. The team is also interested in making the camera work well underwater, which would allow people to scan the depth of oceans or lakes and determine their shape.
And the camera has obvious use in defense, for instance helping military drones better see targets in combat operations. The one drawback is that human skin doesn’t reflect infrared light well, so faces appear as black spots in the 3-D images. This offers a way to become invisible to the camera: Simply get naked.
“The superconducting detector has the potential to really be phenomenal,” said engineer Mark Itzler of Princeton Lightwave Inc., who was not involved in the work. It would probably find a wide variety of uses, he said.
But, he added, because superconductors have to be cooled to just a few degrees above absolute zero, it could be challenging to implement the technology. Buller agreed, and said his team was working to get the system to use more conventional semiconductors, like silicon, that would allow them to shrink the technology and better deploy it in the field.
The primary use of the system is likely to be scanning static, man-made targets, such as vehicles. With some modifications to the image-processing software, it could also determine their speed and direction.
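The speed-and-direction modification mentioned above could, in the simplest case, come from differencing successive depth scans: the change in a target's range between two frames gives its radial velocity. A hypothetical sketch of that idea (the function and values are assumptions for illustration, not the team's software):

```python
# Estimating a target's radial speed from two successive depth scans.
def radial_speed(d1_m: float, d2_m: float, dt_s: float) -> float:
    """Radial speed in m/s; positive means moving away from the scanner."""
    return (d2_m - d1_m) / dt_s

# A target measured at 910.0 m, then 912.5 m one second later,
# is receding at 2.5 m/s.
print(radial_speed(910.0, 912.5, 1.0))
```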
One of the key characteristics of the system is the long wavelength of laser light the researchers chose. The light has a wavelength of 1,560 nanometers, meaning it is longer, or “redder,” than visible light, which spans only about 380 to 750 nanometers. This long-wavelength light travels more easily through the atmosphere, is not drowned out by sunlight, and is safe for eyes at low power. Many previous time-of-flight (ToF) systems could not detect the extra-long wavelengths that the Scottish team’s device is specially designed to sense.
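The "redder than visible" claim can be checked directly from c = λf: a longer wavelength means a lower frequency, and 1,560 nm sits well below the red edge of the visible band. A small illustration using the wavelengths quoted in the article:

```python
# Convert wavelength to optical frequency via c = wavelength * frequency.
C = 299_792_458.0  # speed of light, m/s

def frequency_thz(wavelength_nm: float) -> float:
    """Optical frequency in terahertz for a given wavelength in nanometers."""
    return C / (wavelength_nm * 1e-9) / 1e12

print(frequency_thz(1560))  # ~192 THz: below (redder than) the visible band
print(frequency_thz(750))   # ~400 THz: the red edge of visible light
```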
The scanner is particularly good at identifying objects hidden behind clutter, such as foliage. However, it cannot render human faces, instead drawing them as dark, featureless areas. This is because at the long wavelength used by the system, human skin does not reflect back a large enough number of photons to obtain a depth measurement. However, the reflectivity of skin can change under different circumstances. “Some reports indicate that humans under duress—for example, with perspiring skin—will have significantly greater return signals,” and thus should produce better images, McCarthy says.
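The black faces follow from a simple signal-strength condition: a depth value can only be reported for a pixel if enough photons return to time a reliable measurement. A toy sketch of that gating logic (the threshold value is an assumption for illustration, not a parameter from the paper):

```python
# A pixel only gets a depth value if enough photons return from it;
# low-reflectivity surfaces like dry skin fall below the threshold
# and render as black (no data).
from typing import Optional

def depth_or_none(photon_count: int, depth_m: float,
                  min_photons: int = 10) -> Optional[float]:
    """Return the measured depth if the return signal is strong enough."""
    return depth_m if photon_count >= min_photons else None

print(depth_or_none(250, 325.0))  # clothing: strong return, depth reported
print(depth_or_none(3, 325.0))    # dry skin: too few photons, no depth
```

This also explains the perspiration remark: anything that raises the per-pixel photon count pushes more pixels over the threshold.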
Outside of target identification, photon-counting depth imaging could be used for a number of scientific purposes, including the remote examination of the health and volume of vegetation and the movement of rock faces, to assess potential hazards. Ultimately, McCarthy says, it could scan and image objects located as far as 10 kilometers away. “It is clear that the system would have to be miniaturized and ruggedized, but we believe that a lightweight, fully portable scanning depth imager is possible and could be a product in less than five years.”
Next steps for the team include making the scanner work faster. Although the data for the high-resolution depth images can be acquired in a matter of seconds, currently it takes about five to six minutes from the onset of scanning until a depth image is created by the system. Most of that lag, McCarthy says, is due to the relatively slow processing time of the team’s available computer resources. “We are working on reducing this time by using a solid-state drive and a higher specification computer, which could reduce the total time to well under a minute. In the longer term, the use of more dedicated processors will further reduce this time.”
3-D images of a mannequin (top) and person (bottom) from 325 meters away. The left-hand panels show close-up photos of the targets taken with a standard camera. In the center are 3-D images of these targets taken by the scanner from 325 meters away. On the right is a color-coded map showing the number of photons that bounce off the targets and return to the detector, with black indicating a low number of photons. Notice that human skin does not show up well using the scanner: the mannequin’s face includes depth information, but the person’s face does not.
Brian Wang is a Futurist Thought Leader and a popular science blogger with 1 million readers per month. His blog Nextbigfuture.com is ranked #1 Science News Blog. It covers many disruptive technologies and trends including Space, Robotics, Artificial Intelligence, Medicine, Anti-aging Biotechnology, and Nanotechnology.
Known for identifying cutting edge technologies, he is currently a Co-Founder of a startup and fundraiser for high potential early-stage companies. He is the Head of Research for Allocations for deep technology investments and an Angel Investor at Space Angels.
A frequent speaker at corporations, he has been a TEDx speaker, a Singularity University speaker and guest at numerous interviews for radio and podcasts. He is open to public speaking and advising engagements.