Early Holodeck Level Real Life Holographic Videos

BYU’s holography research group can create holographic lightsaber battles, and spaceship battles between the Starship Enterprise and a Klingon battle cruiser, complete with photon torpedoes launching and striking the enemy vessel — all visible to the naked eye.

The researchers believe they will be able to create immersive holographic video that surrounds the viewer with a perceived infinite display size.

Dan Smalley and his team of researchers garnered national and international attention three years ago when they figured out how to draw screenless, free-floating objects in space. Called optical trap displays, these images are created by trapping a single particle in the air with a laser beam and then moving that particle around, leaving behind a laser-illuminated path that floats in midair — like “a 3D printer for light.”

The development paves the way for an immersive experience where people can interact with holographic-like virtual objects that co-exist in their immediate space.

“Most 3D displays require you to look at a screen, but our technology allows us to create images floating in space — and they’re physical, not some mirage,” Smalley said. “This technology can make it possible to create vibrant animated content that orbits around or crawls on or explodes out of everyday physical objects.”

To demonstrate that principle, the team has created virtual stick figures that walk in thin air. They were able to demonstrate the interaction between their virtual images and humans by having a student place a finger in the middle of the volumetric display and then filming the same stick figure walking along and jumping off that finger.

“We can play some fancy tricks with motion parallax and we can make the display look a lot bigger than it physically is,” Rogers said. “This methodology would allow us to create the illusion of a much deeper display — up to, theoretically, an infinite-size display.”

Nature Scientific Reports – Simulating virtual images in optical trap displays

Optical trap displays (OTDs) are an emerging display technology with the ability to create full-color images in air. Like all volumetric displays, OTDs lack the ability to show virtual images. However, in this paper we show that it is possible to simulate virtual images by employing a time-varying perspective projection backdrop.

The modified parallax does appear to create images perceived behind the drawing volume, and our calculated error supports the use of this method: after accounting for bias, the modified parallax shows good agreement with simulation. This demonstrates the potential to extend the display space of a volumetric display beyond its physical boundaries. The 80% increase of display volume in one dimension demonstrated here can be extrapolated to infinity, given an immersive display where the viewer is always looking through the display volume.
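The geometry behind this perspective projection backdrop can be sketched in a few lines. The idea is to intersect the line of sight from the tracked eye through a virtual point (which lies behind the physical drawing volume) with a backdrop plane at the rear of that volume, and draw the point there; as the viewer moves, the drawn point shifts, reproducing the motion parallax of the deeper virtual point. This is a minimal sketch of that calculation — the coordinate frame, plane position, and point values are illustrative assumptions, not values from the paper.

```python
# Hedged sketch of a time-varying perspective projection backdrop:
# project a virtual point (behind the display volume) onto a backdrop
# plane at the rear of the drawing volume, along the ray from the
# tracked eye. All dimensions below are illustrative (units: cm).

def project_to_backdrop(eye, virtual_point, backdrop_z):
    """Intersect the eye -> virtual_point ray with the plane z = backdrop_z."""
    ex, ey, ez = eye
    vx, vy, vz = virtual_point
    t = (backdrop_z - ez) / (vz - ez)  # parametric distance along the ray
    return (ex + t * (vx - ex), ey + t * (vy - ey), backdrop_z)

# Assume the drawing volume occupies 0 <= z <= 10 cm, backdrop at its rear face.
backdrop_z = 10.0
virtual = (0.0, 0.0, 18.0)  # a point simulated 8 cm "behind" the volume

# As the viewpoint moves sideways, the drawn point shifts on the backdrop,
# which is what produces the perceived depth behind the display.
left_eye_view = project_to_backdrop((-3.0, 0.0, -30.0), virtual, backdrop_z)
right_eye_view = project_to_backdrop((3.0, 0.0, -30.0), virtual, backdrop_z)
print(left_eye_view, right_eye_view)
```

Because the projection depends on the eye position in all three dimensions, this also makes concrete why the method requires viewer tracking, as the limitations below discuss.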

Limitations of this approach include (1) a lack of binocular disparity, (2) the need for motion tracking of the viewer’s eye position, and (3) mismatch of accommodation/vergence and other visual cues.

On the first limitation: this experiment was a monocular test. To be effective for normal-sighted human viewers, our approach must eventually be modified to also provide accurate binocular parallax. For binocular parallax to function, the OTD must be capable of controllable anisotropic scatter. To date, we have demonstrated anisotropic scatter, and we have outlined two possible methods for exerting control over this directional scatter in the future, which would allow each eye of the user to receive a different perspective based on its spatial location. With the possible future addition of directional output control, the method proposed here would become more effective without any additional changes.

The second limitation is that this method requires the viewer (specifically the viewer’s head) to be tracked. This is a significant encumbrance, as normal OTD real images require no knowledge of the user’s position and still provide almost 4π steradians of view angle. However, once directional scatter has been achieved, tracking of the viewer could be omitted in at least two dimensions (horizontal and vertical): each angular output of the display would carry image points corresponding to the perspective from that direction, updated regardless of viewer presence. The third dimension of the viewer position — the distance of the viewer from the display — would still be needed for ideal perspective reconstruction, as the perspective projection is based on a 3D observation point. Further pursuit of directional scattering control is thus capable of solving one major shortcoming of current OTD technology, reducing the complexity of the method presented here, and extending its usefulness to include independent virtual images for several viewers at once.

The final limitation is the mismatch between the accommodative cue, which leads the user to focus at the projection plane, and the parallax cue, which leads the viewer to focus at the perceived point. This stereopsis/accommodation mismatch is common in other systems and sometimes causes adverse side effects for users. To mitigate it, we must place the perspective projection plane at a distance where parallax is more dominant than accommodation. This requirement is in harmony with the theatrical backdrop approach proposed in this paper, especially given the relatively rapid drop-off of accommodation dominance with image distance.

We would argue that, these limitations notwithstanding, simulating virtual images with an OTD would be preferable to the use of a hybrid OTD/holography system, which has been proposed. Unlike OTDs, holograms are extremely computationally intensive, and their computational complexity scales rapidly with both display size and point spread function. Neither is true for OTDs. Consider a background of stars: regardless of the number of stars, a holographic display would require terabytes per second of data to provide the diffractive focusing power to render sharp star-like points, and the parallax and focus cues would be wasted given the extreme distance of the virtual points. By comparison, an OTD would only require a bandwidth proportional to the number of visible stars (about 1.8 Mb/s to represent the approximately 5,000 visible stars).
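One back-of-envelope accounting that reproduces the quoted 1.8 Mb/s figure: treat each visible star as a single drawn point needing 24 bits of color per refresh, at a 15 Hz refresh rate. The bit depth and refresh rate here are illustrative assumptions chosen to match the figure, not parameters stated in the paper.

```python
# Back-of-envelope check of the quoted OTD bandwidth for a star field.
# Assumptions (illustrative, not from the paper): each visible star is
# one static point needing 24 bits of color per refresh, drawn at 15 Hz.
stars = 5000
bits_per_point = 24  # 8 bits each for R, G, B
refresh_hz = 15

bandwidth_bps = stars * bits_per_point * refresh_hz
print(bandwidth_bps / 1e6, "Mb/s")  # -> 1.8 Mb/s
```

The key contrast with holography survives any reasonable choice of these constants: OTD bandwidth grows linearly with the number of visible points, while holographic bandwidth grows with the full diffractive aperture.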

SOURCES- Nature Scientific Reports, BYU
Written By Brian Wang, Nextbigfuture.com

14 thoughts on “Early Holodeck Level Real Life Holographic Videos”

  1. Now, I saw an article where a burst of light could change paint…cold water hot wheels style.

    Now, here is what I want: You put that paint on big starship toys…and this mechanism inside the toy so it actually shoots out the phaser and torpedo slots. When the shots hit…the paint on that spot reddens and blackens like phaser damage from Star Trek II! You put circular half sphere smart phone displays on the original TOS Enterprise as nacelle domes. The whole model skin may be a display one day.

    Perfect for toy ships

  2. well those post-departed entertainer holograms…
    there is a market — something more immersive?

  3. personal use vs military use vs entertainment industry could spawn various versions and investor options
    coming to a 'HoloPlex' near you

  4. agreed. it's the stimulation of the senses, not an arbitrary re-visioning of standard space-time, that will draw the crowds.

  5. Yeah. I point you to Ready Player One as the likely pinnacle of AR-VR-HR. Though liquids and airborne foams could have 'benefits'

  6. must we be surrounded at all times? perhaps objects with varying pixelation properties that we can interact with.

  7. Yes. Small and light – so need a breathing apparatus – 1/4-inch could provide some kind of comprehensible image. But make transparent then any color? Super light – fractions of a gram. What about that aerogel stuff?

  8. …or packing peanuts. Transparent, realigning, possibly-AI controlled packing peanuts. Already life is becoming the landing page of The Onion.

  9. Perhaps easier in a liquid or other medium which you could suspend a near-lattice with individual 'turn on-turn off' pixels' potential but could also flow through it easily. Perhaps the elements of the lattice are bonded loosely to allow one to flow between them and then have them re-form up – even -possibly- somehow individually propulsed. I suppose that I am imagining a deep 'activated transparent' ball pit where the balls could return to their place quickly…

  10. It will be a sad day when we no longer have 'tech that have a long way to go before commercialization'

  11. Two big drawbacks:

    (1) The objects are transparent luminous traces, which means that everything will look like a "ghost". Not what you want, really.

    (2) They would probably need to trace thousands if not millions of particles to make a real scene instead of just some cubic millimetres

    The first is the most difficult to solve, since it's an innate property of their images.

  12. There were other, previous technology demos showing light shows in midair. But those used high-power lasers to ignite air into small plasma explosions, producing (so far) monochrome points of light that can look like 3D objects when enough of them are produced quickly enough.

    This one looks less dangerous than that, making images with particles moved with lasers, with far less power and in multiple colors.

    Also, images would be restricted to the 3D projector volume, while the others are good even for open air advertising (not that such visual pollution is even desirable).
