Best exposed

Light-field photography has the potential to revolutionize everything that we associate with digital imaging today. Oliver Bimber, head of the Institute of Computer Graphics at Johannes Kepler University Linz, provides an insightful glimpse into a future beyond the realm of 3D television and megapixel digital cameras.

Light fields make it possible to generate pictures that could not realistically be taken using conventional optics. The example shows how a light-field photograph simulates a snapshot taken by a camera with an aperture of approximately one meter. The extremely shallow depth of field that results from an aperture of such an enormous diameter means that objects in a scene's foreground almost completely disappear when the camera is focused on objects in the background. Picture credit: Institute of Computer Graphics at Johannes Kepler University Linz

Prof. Bimber, do you regard light fields as the future of digital imaging? How are we to picture this?

Up to now, our three-dimensional world has usually been depicted two-dimensionally. This is the case, for instance, in display systems like television, in imaging systems like photography, and in digital image processing. As you surely realize, two-dimensional images don't deliver an ideal representation of our world, not even after the transition from analog technologies to the digital ones that we now use almost exclusively, such as digital photography and digital display screens.

Now, the question is: What would be a better form of representation? Surely the first thing that comes to mind is a three-dimensional form of representation, like the images generated by a depth-sensing camera such as Microsoft Kinect, or those produced by stereoscopic displays like 3D television. Unfortunately, this isn't a complete representation either, since it's limited to purely diffuse surfaces that reflect light uniformly, irrespective of direction. But many common, everyday objects transport light in more complicated ways. Consider, for example, transparent objects or reflective surfaces. For those, depth-sensing cameras no longer work.

So, if representing a real scene appropriately and completely is what you'd like to do, then you have to consider the scene's entire light transport. That is, you have to know the brightness and color of the light that is reflected from any three-dimensional point in the scene in any direction whatsoever. A direction can be represented spherically with two dimensions. In physics, this combination of three spatial and two directional dimensions is called the plenoptic function. It describes a 3D scene completely in five dimensions; three alone would be insufficient. Accordingly, the objective is to replace digital images, which are merely 2D projections of the 5D plenoptic function, with the plenoptic function itself. If you had a screen that, instead of 2D images, displayed the plenoptic function of the scene in 5D, then you would have a perfect three-dimensional display. Stereoscopic displays can't do anything even close to that. A camera that registers the 5D plenoptic function would be a perfect imaging system. Unfortunately, it's impossible to completely scan the plenoptic function with cameras or to display it on screens.
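To make this concrete, here is one common way of writing the plenoptic function; the exact notation varies across the literature and is offered as background rather than coming from the interview itself:

```latex
% The plenoptic function: radiance as a function of a 3D position
% and a 2D direction (two spherical angles), i.e. five dimensions.
P(x, y, z, \theta, \phi)
% A conventional 2D photograph fixes the viewpoint and projects
% this 5D function down to two image coordinates:
I(s, t) \approx P\big(x_0, y_0, z_0,\; \theta(s, t),\; \phi(s, t)\big)
```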

Light fields are 4D excerpts of the 5D plenoptic function and thus the closest practical approximation of it. They are considerably more powerful than 2D images or the 3D data of a depth sensor.
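The reason one dimension can be dropped is that, in free space, radiance is constant along a ray, so the 5D function is redundant along rays. The classic two-plane ("light slab") parameterization from the 1996 light-field rendering literature, again offered as background rather than as something from the interview, indexes each ray by its intersections with two parallel planes:

```latex
% Two-plane ("light slab") parameterization of a 4D light field:
% a ray is identified by where it pierces the (u, v) plane and the
% parallel (s, t) plane; its radiance is then the 4D function
L(u, v, s, t)
```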

If we consider how 2D images and today's imaging and display systems could be enhanced and upgraded, then light-field technology is the logical direction in which to go. It's almost as detailed and comprehensive as holography, but nowhere near as complex.

Light fields thus have the potential to radically change everything that we associate with digital imaging today, and that’s quite a lot—not only photography but also imaging systems in general, digital image processing, visualization, computer graphics, display systems and television technology. In the future, all of them could be based on light fields instead of digital images. The advantages are immense: expanded photographic possibilities, perfect three-dimensional TV without special goggles and for any number of viewers, enormous improvements in digital image processing, and much more.

How does a light-field camera record so much additional information?

There is a whole series of ways to record light fields. For starters, there are cameras with a coded aperture, which is not simply an opening of a particular diameter but rather a binary or grey-tone pattern. At the top of the line are camera arrays, large matrices made up of hundreds of small individual cameras. The most common variant at the moment is a micro-lens matrix placed in front of the actual image sensor. The micro-lenses map the light that passes through them into the camera as a 4D encoding on the image sensor. With a normal camera, you merely photograph 2D location information, which means that a pixel corresponds to a location. A light-field camera, on the other hand, registers 2D location information as well as 2D directional information. Thus, the camera records the individual rays of light discretely: for each ray, the image sensor registers the point of intersection and the direction it's coming from. After the shot has been taken, this 4D information can be processed and modified.
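As a rough illustration of that 4D encoding, here is a minimal Python sketch of how a raw microlens image could be rearranged into a 4D array. The perfectly aligned microlens grid, the square lenslets, and the function name are my own simplifying assumptions, not any real camera's decoding pipeline:

```python
import numpy as np

def decode_lenslet_image(raw, n):
    """Rearrange a raw microlens sensor image into a 4D light field.

    raw -- 2D array of shape (H, W), with H and W multiples of n
    n   -- pixels per microlens along each axis
    Returns an array indexed as L[v, u, t, s]: the first two axes are
    direction (which pixel under a microlens), the last two are position
    (which microlens the ray passed through).
    """
    h, w = raw.shape
    # Split each axis into (microlens index, pixel-under-lens index) ...
    lf = raw.reshape(h // n, n, w // n, n)
    # ... and reorder so direction comes first and position second.
    return lf.transpose(1, 3, 0, 2)

# Example: a hypothetical sensor with 10x10 pixels under each microlens.
raw = np.zeros((3000, 4000))
print(decode_lenslet_image(raw, 10).shape)  # (10, 10, 300, 400)
```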

What elementary techniques is this based on and how long have they been around?

As I said, the use of light-field technology won't be limited to the field of photography. This is much more fundamental. Consider all the applications today that involve recording, displaying, processing, modifying and computing digital images. All of them will be impacted by light-field technology. Physicists have long been aware of the principle of the plenoptic function. In computer science, the first light-field cameras were developed in 1996. Now, there are already several light-field cameras on the market for a variety of application areas such as photography and industrial image processing, and beyond cameras there are light-field displays, light-field lighting systems, light-field sensors, and visualization & rendering techniques based on light fields.

With the enormous amount of information that will be contained in a photograph, will computer memory storage capacity once again become an issue?

Let's forget about photography for a moment and talk about the difference between 2D images and 4D light fields, in order to avoid creating the impression that this is simply a matter of photography. Today, we have high-definition digital images such as photographs and television pictures; their size is in the megabyte range. Since light fields don't contain pixels but rather ray information, their resolution can no longer be specified as a number of megapixels. Light fields have giga-ray resolutions, and their size is in the gigabyte range. This means that the quantum leap from digital images to digital light fields is a leap from millions to billions, both in resolution and in size.
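The panorama light field described at the end of this article gives a feel for these magnitudes. A back-of-the-envelope calculation, in which the 3 bytes of color per ray is purely my illustrative assumption:

```latex
% Directional samples per spatial sample, using the panorama's own numbers:
\frac{2.54 \times 10^{9}\ \text{rays}}
     {17{,}885 \times 1{,}275 \approx 2.28 \times 10^{7}\ \text{pixels}}
  \approx 111\ \text{directions per pixel}
% Uncompressed size, assuming 3 bytes (RGB) per ray:
2.54 \times 10^{9} \times 3\ \text{bytes} \approx 7.6\ \text{GB}
```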

The question of storage capacity and processor performance might thus indeed become an issue again, since this data volume grows much faster than image resolution has grown up to now. Among the solutions we're now working on is an intelligent caching process that makes it possible to deal with such gigantic quantities of data.

But this ultimately brings us to a key point. Photo sensors offer ever higher resolution, and digital cameras now feature sensors capable of 50 megapixels, but why do we actually even need such high resolution? Certainly not for so-called normal applications. But sensible use could indeed be made of the high resolution of photo sensors in combination with multiplexed information. After all, that's exactly how it works when you place a color filter in front of the sensor. A light-field camera, however, wouldn't multiplex colors but rather directions. Accordingly, a simple light field would be no larger than, for instance, a 2D image with a 50-megapixel resolution. This is what application developers in light-field photography are currently attempting to achieve. In the future, though, light fields will be considerably larger, and hopefully our memory chips and processors will be, too.
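To put illustrative numbers on that analogy (the 10 × 10 directional grid is an arbitrary choice of mine, not a figure from the interview): just as a Bayer color filter spends sensor pixels on color channels, a microlens array spends them on directions:

```latex
% A 50-megapixel sensor, spent on directions instead of extra spatial detail:
\underbrace{50 \times 10^{6}}_{\text{sensor pixels}}
  = \underbrace{0.5 \times 10^{6}}_{\text{spatial samples}}
    \times \underbrace{10 \times 10}_{\text{directions per sample}}
```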

Where do you see room for improvement in the light-field cameras currently available?

As far as the cameras now on the market are concerned, in their price-to-resolution ratio. There are reasonably priced cameras ($200-$300), but they support a spatial resolution of less than 1 megapixel. Other cameras support a spatial resolution of at least 10 megapixels, but they sell for more than €35,000. You have to keep in mind that, in both cases, every spatial sample also carries a directional resolution (the light field is 4D). Here, a 10-megapixel spatial resolution means that a 30-megapixel image sensor was actually used, and the remaining pixels store the directional information.

But, as I said: light fields are more than just cameras and photography. Nevertheless, there’s a much bigger problem than that of the hardware: the processing of light fields. It’s impossible to use today’s image processing techniques on light fields. Here’s where we really do have to reinvent the wheel, which is precisely what we’re doing here in Linz. All the algorithms and the mathematical fundamentals of image processing have to be reconceived for light fields. After all, we want to be able to do at least as much with light fields as we can do with pictures today—and hopefully a whole lot more. Thus, the software poses more of a problem than the hardware does.
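One example of such processing is synthetic refocusing, the operation behind the click-to-refocus panorama mentioned below. Here is a minimal shift-and-add sketch in Python, assuming the 4D array layout from the decoding sketch above; the integer-pixel shifts are a crude simplification of what real implementations do with interpolation:

```python
import numpy as np

def refocus(lf, slope):
    """Shift-and-add refocusing of a 4D light field L[v, u, t, s].

    slope -- chooses the synthetic focal plane: each directional view is
    shifted proportionally to its offset from the aperture center, then
    all views are averaged. Points on the chosen plane line up and appear
    sharp; points off it are averaged across directions and blur out.
    """
    n_v, n_u, height, width = lf.shape
    out = np.zeros((height, width))
    cv, cu = (n_v - 1) / 2.0, (n_u - 1) / 2.0
    for v in range(n_v):
        for u in range(n_u):
            dy = int(round(slope * (v - cv)))
            dx = int(round(slope * (u - cu)))
            out += np.roll(lf[v, u], shift=(dy, dx), axis=(0, 1))
    return out / (n_v * n_u)
```

Averaging many directional views in this way is also what simulates the one-meter aperture in the photograph at the top of this article: the synthetic aperture is as wide as the set of directions the light field contains.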

The world's first panorama light field, with a resolution of 2.54 gigarays (2.54 billion pieces of light-ray information) and a spatial resolution of 22 megapixels (17,885×1,275 pixels), makes it possible to, among other things, alter the focus of a previously taken picture via mouse-click. Image credit: Institute of Computer Graphics of Johannes Kepler University Linz

Note: On Thursday, January 31, 2013 at 8 PM in the Ars Electronica Center, Oliver Bimber will be the featured speaker at Deep Space LIVE: “Light Fields – The Future of Digital Imaging.”