Stanford scientists are developing a camera that will allow you to refocus your images after you have captured them, according to NewScientist.com.
In an ordinary digital camera, a sensor behind the lens records the light level that hits each pixel on its surface. If the light rays reaching the sensor are not in focus, the image appears blurry. Now, Pat Hanrahan and his team at Stanford University have figured out how to refocus an image after the light has reached the camera. They inserted a sheet of 90,000 lenses, each just 125 micrometres across, between the camera’s main lens and the image sensor. This records the angle of the light rays that strike each microlens, as well as the amount of light arriving along each ray. Software can then adjust these values for each microlens to reconstruct what the image would have looked like had it been properly focused. That also means any part of the image can be refocused – not just the main subject.
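To make the idea concrete, here is a minimal sketch of how a raw plenoptic sensor readout could be unpacked into a 4D light field. The layout is an assumption for illustration only (not the Stanford team's actual format): each microlens is taken to cover a small block of sensor pixels, with a pixel's position inside its block encoding the angle the ray arrived from.

```python
import numpy as np

def decode_light_field(raw, u_res, v_res):
    """Reshape a raw plenoptic sensor image into a 4D light field.

    Hypothetical layout: each microlens covers a u_res x v_res block of
    sensor pixels, and pixel (u, v) within a block records the light
    arriving from aperture direction (u, v). Returns lf[u, v, s, t],
    where (s, t) indexes the microlens (i.e. the spatial position).
    """
    H, W = raw.shape
    s_res, t_res = H // u_res, W // v_res
    # raw[s*u_res + u, t*v_res + v]  ->  lf[u, v, s, t]
    return raw.reshape(s_res, u_res, t_res, v_res).transpose(1, 3, 0, 2)

# Toy sensor: 64x64 pixels, a 4x4 pixel block under each microlens
raw = np.random.rand(64, 64)
lf = decode_light_field(raw, 4, 4)
print(lf.shape)  # (4, 4, 16, 16)
```

Each fixed (u, v) slice of the result is a "sub-aperture" image: the scene as seen from one direction through the main lens, which is exactly the extra information an ordinary sensor throws away.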
Wired.com says that the Stanford team calls the invention a light field camera.
It stems from early-20th-century work on integral photography, which experimented with placing lens arrays in front of film, and from an early-1990s plenoptic camera developed at MIT and used for range finding. By building on these ideas, [Ren] Ng hopes to improve commercial cameras’ focusing abilities. Traditionally, light rays pass through a camera’s lens and converge at one point on film or a digital sensor; the camera sums the incoming light without capturing much information about where it came from. Ng’s camera places about 90,000 microlenses between the main lens and the sensor. The microlenses record all the rays of incoming light and their directions of origin, and software later adds up the rays according to how the picture is being refocused.
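The "adds up the rays" step can be sketched as a shift-and-add over the recorded views. This is a simplified illustration, not the team's actual software: each sub-aperture view is shifted in proportion to its angular offset (scaled by a hypothetical refocus parameter `alpha`) and the views are averaged, which moves the plane that ends up in focus.

```python
import numpy as np

def refocus(light_field, alpha):
    """Synthetic refocus of a 4D light field by shift-and-add.

    light_field: array indexed [u, v, s, t] — each (u, v) slice is the
    scene seen from one aperture direction (a sub-aperture view).
    alpha: refocus parameter; 0 leaves the original focal plane,
    other values shift each view before summing, refocusing the image.
    """
    U, V, S, T = light_field.shape
    cu, cv = (U - 1) / 2, (V - 1) / 2
    out = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            # Shift each view by an amount proportional to its angular
            # offset from the aperture centre, then accumulate.
            du = int(round(alpha * (u - cu)))
            dv = int(round(alpha * (v - cv)))
            out += np.roll(light_field[u, v], (du, dv), axis=(0, 1))
    return out / (U * V)  # average simulates the full open aperture

# Toy light field: 5x5 angular samples of a 32x32 scene
lf = np.random.rand(5, 5, 32, 32)
img = refocus(lf, alpha=1.0)
print(img.shape)  # (32, 32)
```

Because the refocusing happens entirely in this summation, any `alpha` can be chosen after capture, which is why any part of the image, and not just the original subject, can be brought into focus.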
(Thanks to Didi S DubelyeW for the heads up.)