Early Holodeck-Level Real-Life Holographic Videos

BYU’s holography research group can create holographic lightsaber battles, as well as spaceship battles between the Starship Enterprise and a Klingon battle cruiser complete with photon torpedoes launching and striking the enemy vessel, all visible to the naked eye.

The researchers believe they will be able to create immersive holographic video that surrounds the viewer with a perceived infinite display size.

Dan Smalley and his team of researchers garnered national and international attention three years ago when they figured out how to draw screenless, free-floating objects in space. Called optical trap displays, these images are created by trapping a single particle in the air with a laser beam and then moving that particle around, leaving behind a laser-illuminated path that floats in midair, like a “3D printer for light.”

The development paves the way for an immersive experience where people can interact with holographic-like virtual objects that co-exist in their immediate space.

“Most 3D displays require you to look at a screen, but our technology allows us to create images floating in space — and they’re physical, not some mirage,” Smalley said. “This technology can make it possible to create vibrant animated content that orbits around or crawls on or explodes out of everyday physical objects.”

To demonstrate that principle, the team has created virtual stick figures that walk in thin air. They demonstrated the interaction between their virtual images and humans by having a student place a finger in the middle of the volumetric display and then filming a stick figure walking along and jumping off that finger.

“We can play some fancy tricks with motion parallax, and we can make the display look a lot bigger than it physically is,” Rogers said. “This methodology would allow us to create the illusion of a much deeper display, theoretically up to an infinite display size.”

Nature Scientific Reports – Simulating virtual images in optical trap displays

Optical trap displays (OTD) are an emerging display technology with the ability to create full-color images in air. Like all volumetric displays, OTDs lack the ability to show virtual images. However, in this paper we show that it is possible to instead simulate virtual images by employing a time-varying perspective projection backdrop.

The modified parallax does appear to create images perceived behind the drawing volume. Our calculated error supports the use of this method. The modified parallax, after accounting for bias, shows good agreement with simulation. This shows the potential effectiveness of increasing the display space of the volumetric display beyond the physical boundaries of the display. The increase of display volume by 80% in one dimension demonstrated here can be extrapolated to infinity, given an immersive display where the viewer is always looking through the display volume.
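The perspective projection backdrop described above can be sketched with simple similar-triangles geometry: given the tracked viewer position, a point of a virtual scene lying behind the display is drawn where the viewer's line of sight crosses the physical drawing plane, and the drawn point is recomputed as the viewer moves. The function name, coordinate conventions, and numbers below are illustrative assumptions, not code from the paper.

```python
def project_to_backdrop(viewer, virtual_point, plane_z):
    """Map a virtual point (behind the display) onto the backdrop plane.

    The drawn point is where the ray from the tracked viewer position
    through the virtual point intersects the physical drawing plane at
    z = plane_z. Geometry and units here are illustrative assumptions.
    """
    vx, vy, vz = viewer
    px, py, pz = virtual_point
    # Parameter t where viewer + t * (virtual_point - viewer) reaches z = plane_z
    t = (plane_z - vz) / (pz - vz)
    return (vx + t * (px - vx), vy + t * (py - vy), plane_z)

# Viewer at the origin looking at a virtual point 2 m behind a backdrop at z = 1 m.
drawn = project_to_backdrop((0.0, 0.0, 0.0), (0.3, 0.2, 3.0), 1.0)
print(drawn)  # → (0.1, 0.06666666666666667, 1.0)
```

As the viewer's tracked head position changes, repeatedly calling this projection produces the time-varying parallax that makes the drawn points appear to sit behind the physical drawing volume.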

Limitations of this approach include (1) a lack of binocular disparity, (2) the requirement of motion tracking of the viewer’s eye position, and (3) a mismatch of accommodation/vergence and other visual cues.

Regarding the first limitation: this experiment was a monocular test. To be effective for normal-sighted human viewers, our approach must eventually be modified to also provide accurate binocular parallax. For binocular parallax to function, the OTD must be capable of controllable anisotropic scatter. To date, we have demonstrated anisotropic scatter, and we have outlined two possible methods for exerting control over this directional scatter in the future, which would allow each eye of the user to receive a different perspective based on its spatial location. With the possible future addition of directional output control, the method proposed here would become more effective without any additional changes.

The second limitation is that this method requires the viewer (specifically the viewer’s head) to be tracked. This is a significant encumbrance, as normal OTD real images require no knowledge of the user’s position and still provide almost 4π steradians of view angle. However, once directional scatter has been achieved, tracking of the viewer could be omitted in at least two dimensions (horizontal and vertical): each angular output of the display would carry image points corresponding to the perspective from that position, updated regardless of viewer presence. The third dimension of the viewer position, the distance of the viewer from the display, would still be needed for ideal perspective reconstruction, as the perspective projection is based on a 3D observation point.
Further pursuit of directional scattering control is thus capable of solving one major shortcoming of OTD technology at this time, reducing the complexity of the method presented here, and extending its usefulness to include independent virtual images for several viewers at once. The final limitation is the mismatch between the accommodative cue, which leads the user to focus at the projection plane, and the parallax cue, which leads the viewer to focus at the perceived point. This stereopsis/accommodation mismatch is common in other systems, sometimes causing adverse side effects for users. To mitigate it, we must place the perspective projection plane at a distance where parallax is more dominant than accommodation. This requirement is in harmony with the theatrical backdrop approach that we have proposed in this paper, especially given the relatively rapid drop-off of accommodation dominance with image distance.

We would argue that, these limitations notwithstanding, simulating virtual images with OTD would be preferable to the use of a hybrid OTD/holography system, which has been proposed. Unlike OTDs, holograms are extremely computationally intensive and their computational complexity scales rapidly with display size. The complexity also scales rapidly with point spread function. Neither is true for OTD displays. Consider a background of stars: regardless of the number of stars, a holographic display would require terabytes per second of data to provide the diffractive focusing power to render sharp star-like points, and the parallax and focus cues would be wasted given the extreme distance of the virtual points. By comparison, OTDs would only require a bandwidth proportional to the number of visible stars (1.8 Mb/s to represent the approximately 5000 visible stars).
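The star-backdrop bandwidth claim can be checked with back-of-envelope arithmetic. The per-star bit budget and update rate below are assumptions chosen to show how a figure on the order of 1.8 Mb/s arises for roughly 5000 visible stars; the excerpt above does not specify these parameters.

```python
# Back-of-envelope check of the OTD bandwidth figure for a star backdrop.
# bits_per_star and updates_per_second are illustrative assumptions,
# not values given in the paper.

visible_stars = 5000       # approximate naked-eye star count
bits_per_star = 36         # assumed: packed position + intensity per update
updates_per_second = 10    # assumed refresh of a slowly changing backdrop

bandwidth_bps = visible_stars * bits_per_star * updates_per_second
print(f"{bandwidth_bps / 1e6:.1f} Mb/s")  # → 1.8 Mb/s
```

The point of the comparison stands regardless of the exact per-star budget: OTD bandwidth scales with the number of visible points, whereas a holographic display must compute a full diffraction pattern whose cost scales with display size and point spread function.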

SOURCES- Nature Scientific Reports, BYU
Written By Brian Wang, Nextbigfuture.com
