Harvard researchers come up with a computational method for creating images with depth from a single lens.
Making 3D images is typically a bit of a production. One common method uses two lenses to capture a stereoscopic pair. Harvard researchers have now devised a way to produce 3D images with a single lens, and without requiring any new hardware.
The method works by taking two images with a camera or microscope. The camera doesn't move, but each image is focused at a different depth. Those images are then fed to an algorithm that calculates the angle of the light arriving at each pixel. Knowing those angles lets the computer work out what the scene would look like from a slightly different position, the way the second lens on a 3D camera would see it.
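To give a rough sense of the computation, here is a minimal sketch in Python/NumPy. It is not the researchers' code: it assumes a simplified version of the approach in which the difference between the two focal slices gives an axial intensity derivative, a Poisson-type equation is solved in Fourier space for a potential, and the gradient of that potential (normalized by intensity) yields the average ray angle at each pixel, which can then be used to resample the image for a shifted viewpoint. All function names, the Fourier solve, and the resampling step are illustrative assumptions.

```python
import numpy as np

def light_field_moments(i1, i2, dz, eps=1e-6):
    """Estimate per-pixel average ray angles from two focal slices.

    i1, i2 : 2D arrays, the same scene focused at depths z and z + dz.
    Returns (mx, my): approximate mean ray angle at each pixel.
    """
    i_mean = 0.5 * (i1 + i2)
    didz = (i2 - i1) / dz  # finite-difference axial derivative

    # Solve laplacian(U) = -dI/dz in Fourier space:
    # F[laplacian(U)] = -|k|^2 F[U]  =>  F[U] = F[dI/dz] / |k|^2
    h, w = i1.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    k2 = (2 * np.pi) ** 2 * (fx ** 2 + fy ** 2)
    k2[0, 0] = 1.0  # placeholder to avoid dividing by zero at DC
    u_hat = np.fft.fft2(didz) / k2
    u_hat[0, 0] = 0.0  # the DC component of the potential is arbitrary
    u = np.real(np.fft.ifft2(u_hat))

    # The moment (average ray angle) field is grad(U) / I
    gy, gx = np.gradient(u)
    return gx / (i_mean + eps), gy / (i_mean + eps)

def shift_perspective(image, mx, my, s):
    """Resample the image along the angle field to mimic a viewpoint shift s."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(ys + s * my, 0, h - 1).astype(int)
    src_x = np.clip(xs + s * mx, 0, w - 1).astype(int)
    return image[src_y, src_x]
```

Rendering `shift_perspective` for a range of shift values `s` would produce the sequence of perspective-shifted frames that gets stitched into the stereoscopic animation described below.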
The technique is called "light field moment imaging." A paper describing the research has been published in the journal Optics Letters.
The images can be stitched together to create a stereoscopic animation, as seen in the demonstration video below. What's so intriguing about this technology is that it can be used with a regular camera, opening up potential uses for viewing microscopic materials, photos, and eventually movies in 3D.
"When you go to a 3D movie, you can't help but move your head to try to see around the 3D image but, of course, it's not going to do anything because the stereo image depends on the glasses. Using light field moment imaging, though, we're creating the perspective-shifted images that you'd fundamentally need to make that work -- and just from a regular camera. So maybe one day this will be a way to use all of the existing cinematography hardware and get rid of the glasses. With the right screen, you could play that back to the audience, and they could move their heads and feel like they're actually there," says graduate student Antony Orth.