The Defense Advanced Research Projects Agency (DARPA) has just completed a five-year project called the “Urban Photonic Sandtable Display,” or UPSD, that creates real-time, color, 360-degree 3D holographic displays. Without any special goggles, an entire team of planners can view a large-format (up to 6-foot diagonal) interactive 3D display.
3D hologram sheets are available commercially now
Zebra Imaging in Austin, Tex., sells holographic prints that at first glance look much like ordinary 2-by-3-foot pieces of plastic — until an LED flashlight is shined at them. Then the patterns, burned into the plastic with high-power laser beams, come to life, said Al Wargo, chief executive. Out of the surface springs a model of a complicated building or an intricate network of pipes and mechanical equipment.
No special eyewear is required to view the holographic prints, which typically cost $1,000 to $3,000 each. The company has also demonstrated moving holographic displays in prototype at conferences, Mr. Wargo said. (It introduced color holograms in September.)
At the University of Arizona in Tucson, Dr. Peyghambarian created his displays using 16 cameras. Software rendered the images in holographic pixels, and laser beams directed by the software recorded the information on a novel plastic that can be erased and rewritten in two seconds. Dr. Peyghambarian says that the group is working on speeding up the rate and expects versions to be in homes in 7 to 10 years. Slower versions may be useful far sooner, for example, for long-distance medical consultation.
UPSD assists team-based mission planning, visualization and interpretation of complex 3D data such as intelligence and medical imagery. It permits simultaneous viewing by up to 20 participants and is interactive, allowing the image to be frozen, rotated and zoomed up to the resolution limit of the data. The holographic display provides up to 12 inches of visual depth. The technology also enables realistic two-dimensional printouts of the 3D imagery that front-line troops can take with them on missions.
UPSD is based on full-parallax technology, which lets each 3D holographic object project light in every direction just as the original object would, enabling full 360-degree viewing. Current 3D displays lack full parallax and provide 3D viewing only from certain angles, typically with just three to four inches of visual depth.
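A rough way to see why full parallax is so much more demanding than horizontal-only parallax is to count the discrete view directions a single hologram element must encode. This is an illustrative sketch only; the 1-degree angular step and the helper function are assumptions for illustration, not UPSD specifications:

```python
# Illustrative sketch: directional-sample count per hologram element for a
# full-parallax display vs. a horizontal-parallax-only (HPO) display.
# The 1-degree angular step is an assumed sampling density, not a UPSD figure.

def angular_samples(fov_deg: float, step_deg: float = 1.0,
                    full_parallax: bool = True) -> int:
    """Discrete view directions one element encodes over the given FOV."""
    n = int(fov_deg / step_deg)
    # Full parallax samples a 2-D grid of directions (horizontal x vertical);
    # HPO samples only a 1-D horizontal fan of views.
    return n * n if full_parallax else n

full = angular_samples(360)                       # 360-degree walk-around
hpo = angular_samples(360, full_parallax=False)   # horizontal views only
print(full, hpo, full // hpo)  # → 129600 360 360
```

At this assumed sampling density, a full-parallax element carries 360 times as many directional samples as a horizontal-only one, which is why full 360-degree walk-around viewing has been so hard to achieve.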
UPSD is currently a scalable display platform that can be expanded from a six-inch diagonal up to a six-foot diagonal, in both monochrome and color formats.
UPSD is part of DARPA’s broader efforts in 3D technology research. DARPA recently demonstrated a wide-area 3D LIDAR (Light Detection and Ranging) mapping capability under DARPA’s High Altitude LIDAR Operations Experiment (HALOE). HALOE is providing forces in Afghanistan with unprecedented access to high-resolution 3D data, collected at rates orders of magnitude faster and from much longer ranges than conventional methods. UPSD’s 3D display can support the rapid exploitation of this data for detailed mission planning in rugged, mountainous and complex urban terrain.
DARPA is initially transitioning the UPSD technology to an Air Force research center and two Army research centers to apply the technology to critical applications where the 3D holographic display will provide a unique benefit.
Zebra Imaging of Austin, Texas, was awarded the initial contract in 2005 and has researched and developed the technology.
Develop a full-parallax digital three-dimensional (3-D) display with no moving parts, video-rate imaging based on holograms or hogels, no special viewing apparatus, and a gesture-glove interface.
Visualization of inherently 3-D situations—such as deconfliction, intervisibility, air operations, satellite constellations, terrain/building structures, and complex battlespace data—is significantly hampered when projected onto a two-dimensional (2-D) medium. Despite many attempts based on a variety of approaches, all currently available true 3-D displays have unacceptable levels of visual artifacts, are far too dim, require too much space and power, and have inadequate user interfaces for interacting with 3-D imagery. Stereoscopic approaches are common but require special headgear, which causes discomfort and nausea in many users, diminishes luminance for all viewers, and precludes accessibility for multiple and/or unexpected viewers.
Autostereoscopic (no eyewear) 3-D systems based on the sequential placement of full 2-D perspective images into horizontal viewing zones (2 to 11 are common) do not provide a simple walk-around capability, have uncomfortably restricted viewing zones for even one person, cannot be updated fast enough to prevent image jitter, and cause nausea in most users after about 15 minutes of use. Autostereoscopic systems based on volumetric approaches (e.g., spinning screen, laser-scanned cube, depth-multiplexed 2-D) are too dim and too small to be useful. Autostereoscopic systems based on electronic holography have been too slow and too dim to be useful. Fortunately, recent advances in microprocessors, algorithms, communications, and gesture-control technology have now made it possible to develop a compact full multiplex digital holographic display with adequate performance for operational applications.

The computational power to generate full multiplex holograms can be produced affordably with clusters of consumer personal computers and graphics rendering cards. The hologram pixel (a sample of the 2-D hologram) should ideally be 500 nm or smaller in size and 14 bits in grayscale for adequate discrete representation. Alternatively, basis representations of holograms built from precompiled hologram-element (hogel) basis sets require pixels of 20 µm or smaller, comparable to the 11-20 µm pitches now in production for microdisplays in a variety of MEMS, OLED, and LCD technologies. Nanoelectronic fabrication techniques now being matured by the integrated-circuit industry at the 45-nm node, together with diffractive optics for pixel or hogel imaging, enable fabrication of hologram pixels (hpixels) across 100 sq inches of a 16-inch wafer. The resulting sampled hologram (70 giga-hpixels) might correspond to a true 3-D resolution of several megavoxels in a 30º field of view (FOV).
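As a quick sanity check on these figures (this calculation is not part of the solicitation, and the 532 nm green laser line is an assumed wavelength, since the text does not specify one), the hpixel pitch implied by packing 70 giga-hpixels into 100 square inches can be derived, along with the diffraction-limited viewing angle that pitch supports:

```python
import math

# Sanity check on the stated figures. Assumption: a green laser at 532 nm
# (the source does not specify a wavelength). Derives the hpixel pitch
# implied by 70 giga-hpixels over 100 sq inches, then the diffraction-
# limited full viewing angle 2*asin(lambda / (2 * pitch)) for that pitch.

area_m2 = 100 * 0.0254**2          # 100 sq inches in square metres
n_hpixels = 70e9                   # stated sampled-hologram size
pitch_m = math.sqrt(area_m2 / n_hpixels)

wavelength_m = 532e-9              # assumed wavelength
fov_deg = 2 * math.degrees(math.asin(wavelength_m / (2 * pitch_m)))

print(f"pitch ~ {pitch_m * 1e6:.2f} um, FOV ~ {fov_deg:.0f} deg")
```

Under these assumptions the implied pitch comes out near 1 µm, and the diffraction-limited full angle lands close to the 30º FOV quoted in the solicitation, so the numbers hang together.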
The goal of this topic is to capitalize on this opportunity and begin to enable petabyte command-and-control databases to be visualized and controlled dynamically in 3-D, with look-around in all directions and artifact levels acceptable to operators in long-term use. Gesture control of the imagery via a sensor-embedded glove is also envisioned, to make user interaction with 3-D content intuitive. Solid-state 3-D would enhance both ground and airborne displays, providing depth information in the cockpit and reducing ambiguity in ground-based applications. The technology developed under this topic should focus on comfortable long-term use by multiple simultaneous viewers in air, space, and cyberspace operations centers, and should be adaptable to airborne functions.
PHASE I: Design a full multiplex holographic display (FMHD) capable of presenting, at a minimum, a full-parallax monochrome image at any pupil position in a 30º FOV that is viewable in room illumination and controllable with a gesture (e.g., glove) interface. Develop a visual-artifact reduction strategy and assess usability and comfort issues.
PHASE II: Fabricate and demonstrate a solid-state FMHD system at video rate, in a single color, in a laboratory environment with a wearable dataglove interface. Define a pathway for integration into a tabletop, multiperson team-workstation form factor that is scalable to wall size. Demonstrate pathways to full color, larger fields of view, and higher resolutions.
PHASE III / DUAL USE: Military applications: complex-system visualization for air, space, and cyberspace situational awareness; planning and execution of missions in command and control centers; battlespace visualization; and medical research. Commercial applications: air traffic control, computer-aided design, real-time functional magnetic resonance brain-activity imaging, scientific data visualization, teaching, entertainment, and medical research.
Brian Wang is a Futurist Thought Leader and a popular science blogger with 1 million readers per month. His blog, Nextbigfuture.com, is ranked the #1 Science News Blog. It covers many disruptive technologies and trends including Space, Robotics, Artificial Intelligence, Medicine, Anti-aging Biotechnology, and Nanotechnology.
Known for identifying cutting edge technologies, he is currently a Co-Founder of a startup and fundraiser for high potential early-stage companies. He is the Head of Research for Allocations for deep technology investments and an Angel Investor at Space Angels.
A frequent speaker at corporations, he has been a TEDx speaker, a Singularity University speaker and guest at numerous interviews for radio and podcasts. He is open to public speaking and advising engagements.