Google Arts & Culture and the Smithsonian's National Air and Space Museum worked with the team behind Light Fields, Google's cutting-edge new form of VR capture, to open the Space Shuttle Discovery to the public in a way never seen before.
Discovery and Light Fields
We’ve always believed in the power of virtual reality to take you places. When you’re actually in a place, the world reacts to you as you move your head around; light bounces off surfaces in different ways, and you see things from different perspectives. To help create this more realistic sense of presence in VR, we’ve been experimenting with Light Fields. The Smithsonian's National Air and Space Museum and Digitization Program Office gave Google access to Space Shuttle Discovery, providing an astronaut’s view inside the orbiter’s flight deck, which has never been open to the public.
What Are Light Fields?
Light Fields are a set of advanced capture, stitching, and rendering algorithms. There is still much work to be done, but they already create still captures that give you an extremely high-quality sense of presence, reproducing motion parallax and highly realistic textures and lighting.
Capturing and Processing a Light Field
With Light Fields, nearby objects feel truly close: as you move your head, they appear to shift a lot, while far-away objects shift less. Light also reflects off objects differently as your viewpoint changes, so you get a strong cue that you’re in a 3D space. And when viewed through a VR headset that supports positional tracking, Light Fields can enable some truly amazing VR experiences based on footage captured in the real world.
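To see why that depth-dependent shift is such a strong cue, here is a minimal back-of-the-envelope sketch in Python (an illustration, not anything from Google's pipeline): the apparent angular shift of a point falls off with its distance, so a 10 cm head movement swings a nearby object across several degrees of your view while barely moving a distant one.

```python
import math

# A minimal sketch (illustration only, not Google's code) of motion parallax:
# when the eye translates sideways by t, a point at distance d shifts in the
# view by an angle of roughly arctan(t / d), so near points shift far more
# than distant ones.

def parallax_deg(head_shift_m: float, depth_m: float) -> float:
    """Apparent angular shift, in degrees, of a point at depth_m when the
    viewpoint translates head_shift_m perpendicular to the line of sight."""
    return math.degrees(math.atan2(head_shift_m, depth_m))

head_shift = 0.10  # a 10 cm head movement inside the capture volume
for depth in (0.5, 2.0, 10.0, 100.0):
    print(f"object at {depth:5.1f} m shifts by {parallax_deg(head_shift, depth):5.2f} deg")

# object at   0.5 m shifts by 11.31 deg
# object at   2.0 m shifts by  2.86 deg
# object at  10.0 m shifts by  0.57 deg
# object at 100.0 m shifts by  0.06 deg
```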
How Does It Work?
This is possible because a Light Field records all the different rays of light coming into a volume of space.
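In the research literature this idea is usually written as the plenoptic function. The formulation below is the standard textbook one, not a detail confirmed by this post: it assigns a radiance to every ray through every point, and because radiance stays constant along a ray in occluder-free space, capturing rays on the surface of a sphere is enough to reconstruct any view from inside it.

```latex
% The plenoptic function: radiance L arriving at position (x, y, z)
% from direction (theta, phi). Standard in the light-field literature.
L = L(x, y, z, \theta, \phi)

% In occluder-free space radiance is constant along each ray, so inside
% the capture volume the 5D function collapses to a 4D light field, here
% indexed by a point (u, v) on the capture sphere plus a ray direction:
L = L(u, v, \theta, \phi)
```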
Structure
To record them, Google modified a GoPro Odyssey Jump camera rig, bending it into a vertical arc of 16 cameras mounted on a rotating platform.
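As a geometric sketch of that structure (the exact camera spacing and arc span are assumptions, not published specs), here is one plausible layout in Python: 16 outward-facing cameras spaced evenly along a half-circle arc of the capture sphere.

```python
import math

# A geometric sketch of the rig (spacing and span are assumptions, not
# published specs): 16 cameras along a vertical arc of a sphere with a
# 70 cm diameter, each facing outward along the surface normal.

NUM_CAMERAS = 16
RADIUS_M = 0.35          # assumed: half of the ~70 cm capture sphere
ARC_SPAN_RAD = math.pi   # assumed: the arc runs pole to pole

cameras = []
for i in range(NUM_CAMERAS):
    theta = ARC_SPAN_RAD * i / (NUM_CAMERAS - 1)       # polar angle from the top
    outward = (math.sin(theta), 0.0, math.cos(theta))  # unit surface normal
    position = tuple(RADIUS_M * c for c in outward)    # arc in the x-z plane
    cameras.append((position, outward))

# Rotating this arc about the vertical (z) axis sweeps out the full sphere.
for position, outward in cameras[:3]:
    print("pos:", tuple(round(c, 3) for c in position),
          "facing:", tuple(round(c, 2) for c in outward))
```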
Action
It takes about a minute for the camera rig to swing around and record approximately 1,000 outward-facing viewpoints on a 70 cm sphere. This gives us a roughly two-foot-diameter volume of light rays, which determines the size of the headspace users can move around in to explore the scenes once they are processed.
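The arithmetic of the sweep is simple. The sketch below assumes the platform pauses at evenly spaced stops, which the rig may not literally do, to show how 16 cameras reach roughly 1,000 viewpoints in about a minute.

```python
import math

# Back-of-the-envelope numbers for the sweep. Assumption: the platform
# pauses at evenly spaced stops; the post only gives the totals.

NUM_CAMERAS = 16
TARGET_VIEWPOINTS = 1000
SWEEP_SECONDS = 60.0     # "about a minute"

stops = math.ceil(TARGET_VIEWPOINTS / NUM_CAMERAS)  # platform positions
step_deg = 360.0 / stops                            # angular spacing per stop
dwell_s = SWEEP_SECONDS / stops                     # time budget per stop

print(f"{stops} stops, {step_deg:.1f} deg apart, ~{dwell_s:.2f} s each "
      f"-> {stops * NUM_CAMERAS} viewpoints")
# 63 stops, 5.7 deg apart, ~0.95 s each -> 1008 viewpoints
```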
Rendering
To render views for the headset, rays of light recorded at the camera positions on the surface of the sphere are sampled to construct novel views from inside the sphere, matching how the user moves their head. The recorded rays are aligned and compressed into a custom dataset file that’s read by special rendering software Google implemented as a plug-in for the Unity game engine.
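Google hasn't published the plug-in's internals, but the geometric core of this style of renderer can be sketched: extend each eye ray until it exits the capture sphere, then blend the captured viewpoints nearest that exit point. Everything in this Python sketch (the two-sample blend, the inverse-distance weights, a single color standing in for each viewpoint's image) is a simplification for illustration.

```python
import numpy as np

# A heavily simplified sketch of light-field view synthesis, not Google's
# renderer: for each ray from the user's eye, find where it exits the
# capture sphere and blend the captured viewpoints nearest that exit point.

SPHERE_RADIUS = 0.35  # assumed 70 cm diameter capture sphere at the origin

def ray_sphere_exit(origin, direction):
    """Point where a ray starting inside the sphere crosses its surface."""
    d = direction / np.linalg.norm(direction)
    # Solve |origin + t*d|^2 = r^2 for the positive root t.
    b = np.dot(origin, d)
    c = np.dot(origin, origin) - SPHERE_RADIUS ** 2
    t = -b + np.sqrt(b * b - c)  # c < 0 inside the sphere: a real root exists
    return origin + t * d

def synthesize_ray(eye, direction, cam_positions, cam_colors):
    """Color for one eye ray: inverse-distance blend of the two captured
    viewpoints closest to the ray's exit point on the sphere."""
    exit_pt = ray_sphere_exit(np.asarray(eye, float), np.asarray(direction, float))
    dists = np.linalg.norm(cam_positions - exit_pt, axis=1)
    nearest = np.argsort(dists)[:2]
    w = 1.0 / (dists[nearest] + 1e-6)
    w /= w.sum()
    # cam_colors[i] stands in for "sample camera i's image along this ray".
    return (w[:, None] * cam_colors[nearest]).sum(axis=0)

# Toy usage: 1,000 viewpoints on the sphere, each reduced to one RGB color.
rng = np.random.default_rng(0)
v = rng.normal(size=(1000, 3))
cam_positions = SPHERE_RADIUS * v / np.linalg.norm(v, axis=1, keepdims=True)
cam_colors = rng.uniform(size=(1000, 3))

color = synthesize_ray((0.05, 0.0, 0.0), (0.0, 0.0, 1.0), cam_positions, cam_colors)
print("blended color:", np.round(color, 3))
```

A production renderer would also select rays by direction, account for scene depth when blending, and decode from the compressed dataset; this sketch only conveys the geometry of sampling a sphere of viewpoints from the inside.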