Abstract
Capturing exposure sequences to compute high dynamic range (HDR) images causes motion blur when the camera moves. This also applies to light-field cameras: frames rendered from multiple blurred HDR light-field perspectives are blurred as well. While the recording time of an exposure sequence cannot be reduced for a single-sensor camera, we demonstrate how this can be achieved for a camera array, decreasing capture time and reducing motion blur for HDR light-field video recording. Applying a spatio-temporal exposure pattern while capturing frames with a camera array reduces the overall recording time and enables the estimation of camera movement within a single light-field video frame. By estimating depth maps and local point spread functions (PSFs) from multiple perspectives with the same exposure, regional motion deblurring can be supported. Missing exposures at the various perspectives are then interpolated.
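To make the idea of a spatio-temporal exposure pattern concrete, here is a minimal sketch, not taken from the paper, that assumes a hypothetical 3×3 camera array and three exposure levels. The function names (`exposure_pattern`, `cameras_with_exposure`) and the specific diagonal/rotating pattern are illustrative assumptions; the point is only that each light-field frame contains every exposure at several perspectives, while each camera cycles through exposures over time.

```python
# Illustrative sketch (not the authors' code): a spatio-temporal exposure
# pattern for an assumed 3x3 camera array with three exposure levels.
import numpy as np

EXPOSURES_MS = [1.0, 4.0, 16.0]   # hypothetical exposure times in milliseconds
ROWS, COLS = 3, 3                 # hypothetical camera array layout


def exposure_pattern(frame_idx: int) -> np.ndarray:
    """Return an index into EXPOSURES_MS for every camera at a given frame.

    The spatial pattern staggers exposures diagonally across the array, and
    the per-frame shift rotates it over time, so one camera uses different
    exposures in consecutive frames while every frame still covers all
    exposure levels at multiple perspectives.
    """
    r, c = np.meshgrid(np.arange(ROWS), np.arange(COLS), indexing="ij")
    return (r + c + frame_idx) % len(EXPOSURES_MS)


def cameras_with_exposure(frame_idx: int, level: int):
    """List (row, col) positions of cameras sharing one exposure level.

    Same-exposure perspectives like these are the ones that could be used
    jointly within a frame, e.g. for depth-map and local PSF estimation.
    """
    pattern = exposure_pattern(frame_idx)
    return list(zip(*np.nonzero(pattern == level)))


if __name__ == "__main__":
    for f in range(3):
        print(f"frame {f}:\n{exposure_pattern(f)}")
        print("short-exposure cameras:", cameras_with_exposure(f, 0))
```

In this sketch, the perspectives that skip a given exposure in one frame correspond to the "missing exposures" the abstract mentions, which would then be interpolated from neighboring perspectives and frames.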
Original language | English |
---|---|
Pages (from–to) | 33–42 |
Number of pages | 10 |
Journal | Computer Graphics Forum |
Volume | 33 |
Issue number | 2 |
DOIs | |
Publication status | Published - May 2014 |
Published externally | Yes |