Google Takes Us Closer to Star Trek Holodecks with Light Field Video

Google is taking immersive media technology to the next level with a practical system for light field video. Wide field-of-view scenes can be recorded and played back, and viewers can move around within the video after it has been captured, revealing new perspectives. Developed by a team of research scientists and engineers, the new system can record, reconstruct, compress, and deliver high-quality immersive light field video lightweight enough to be streamed over regular Wi-Fi, advancing the state of the art in the rapidly emerging fields of augmented reality (AR) and virtual reality (VR).

They have overcome a major obstacle in making virtual experiences realistic, immersive, distributable, and comfortable.

The team records immersive light field videos with a low-cost rig consisting of 46 action sports cameras mounted to a lightweight acrylic dome. Using DeepView, a machine learning algorithm developed last year by members of the same Google research team, they combine the video streams from each camera into a single 3D representation of the scene being recorded. Their paper introduces a new “layered mesh” representation that consists of a series of concentric layers with semi-transparent textures. Rendering these layers from back to front brings the scene vividly and realistically to life. This method solves the very difficult problem of synthesizing viewpoints that were never captured by the cameras in the first place, enabling the user to experience a natural range of head movement as they explore light field video content.
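The back-to-front rendering the paper describes is, at its core, standard alpha ("over") compositing: each semi-transparent layer is blended on top of everything behind it. As a rough illustration of that idea only (not Google's actual layered-mesh renderer, which blends textured spherical meshes on the GPU), here is a minimal sketch using flat image layers:

```python
import numpy as np

def composite_back_to_front(layers):
    """Alpha-composite semi-transparent layers from back to front.

    layers: list of (H, W, 4) float arrays in [0, 1], ordered
    farthest-first; channel 3 is alpha. Returns an (H, W, 3) image.
    Illustrative only -- a generic "over" compositing sketch, not
    the paper's layered-mesh renderer.
    """
    h, w, _ = layers[0].shape
    out = np.zeros((h, w, 3))
    for layer in layers:  # back (farthest) to front (nearest)
        rgb, alpha = layer[..., :3], layer[..., 3:4]
        out = rgb * alpha + out * (1.0 - alpha)  # "over" operator
    return out
```

Because nearer layers only partially cover farther ones, small shifts of the virtual camera reveal plausible parallax between layers, which is what lets the system synthesize viewpoints that no physical camera captured.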

In recent years, the immersive AR/VR field has captured mainstream attention for its promise to give people a truly authentic experience in a simulated environment. Want to really feel like you’re standing among the Redwoods at Yosemite rather than sitting in the living room? Or watch an artist create a sculpture as if you’re with them in the studio? That could be possible with immersive AR/VR technology.

Although the field is still nascent, the team at Google has addressed important challenges, making major research headway in immersive light field video. The research team, led by Michael Broxton, Google research scientist, and Paul Debevec, Google senior staff engineer, plans to demonstrate the new system at SIGGRAPH 2020. The conference, which will take place virtually this year, brings together a wide variety of professionals who approach computer graphics and interactive techniques from different perspectives and continues to serve as the industry’s premier venue for showcasing forward-thinking ideas and research.

SOURCES: Google, SIGGRAPH 2020
Written by Brian Wang

7 thoughts on “Google Takes Us Closer to Star Trek Holodecks with Light Field Video”

  1. This is the web. Everyone wants to be snarky.

    It’s the cool thing to do because it can display cleverness and even be funny sometimes (although most people aren’t very good at it and it just comes out sounding crabby, pessimistic, or flat out mean).

    But sometimes something is just plain cool all by itself.

  2. Huh, I would have expected Google to push for something like this on their Google Streetview capture vehicles, but with a mass of Lytro sensors. The former Lytro employees got sucked into Google…

  3. This is not AI – this is basically a different method for recording and displaying 3D video, called light fields.

  4. Google… no thanks… Anytime I see what Google is doing I think about the HBO show Westworld… Especially season 3 with the company Incite and their phone app, RICO.

Comments are closed.