Pupil segmentation in the light-field camera and its relation to 3D object positions and the reconstructed depth of field

2019 
A ray-trace simulation of the light-field camera is used to calculate point-source responses as a function of 3D source position. Each point-source location yields a unique, well-determined segmented-pupil pattern in the lenslet array's focal plane: lateral object offsets change the pattern's location and symmetry, while defocus distances alter its diameter. Segmented-pupil images can therefore be used to infer the 3D locations of point sources. Numerical simulations show that the centroids and widths of segmented-pupil images determine lateral image positions to within the size of a detector pixel, and image defocus to within the lenslet focal length. In sparse-source applications such as fluorescence microscopy or particle tracking, 3D point-source locations can thus be determined accurately from the observed point-source response patterns. The degree of pupil segmentation also directly constrains the ability to refocus light-field images: once the image defocus distance is large enough that the number of pupil segments exceeds the number of pixels within a "whole" pupil behind a single lenslet, the image can no longer be brought to focus numerically, which defines the light-field camera's depth of field. This constraint implies a depth of field larger than the conventional imaging depth of focus by a factor of the number of detector pixels per lenslet, consistent with the general expectation.
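The two measurements the abstract relies on, the intensity-weighted centroid (for lateral position) and the pattern width (for defocus), together with the stated depth-of-field scaling, can be illustrated with a minimal sketch. The code below is not the paper's implementation: the function names, the Gaussian test pattern, and the use of 2λ(f/#)² as the conventional depth of focus are all illustrative assumptions.

```python
import numpy as np

def centroid_and_width(image, pixel_pitch):
    """Intensity-weighted centroid (-> lateral position) and RMS width
    (-> defocus proxy) of a segmented-pupil point-source pattern."""
    iy, ix = np.indices(image.shape)
    total = image.sum()
    cx = (ix * image).sum() / total * pixel_pitch
    cy = (iy * image).sum() / total * pixel_pitch
    # RMS radius about the centroid stands in for the pattern diameter.
    r2 = (ix * pixel_pitch - cx) ** 2 + (iy * pixel_pitch - cy) ** 2
    width = np.sqrt((r2 * image).sum() / total)
    return cx, cy, width

def lightfield_depth_of_field(wavelength, f_number, pixels_per_lenslet):
    """Depth of field = (pixels per lenslet) x conventional depth of
    focus, taking ~2 * lambda * (f/#)^2 for the latter (an assumed
    textbook form, not a formula quoted from the paper)."""
    return pixels_per_lenslet * 2.0 * wavelength * f_number ** 2

if __name__ == "__main__":
    # Synthetic pattern: a Gaussian spot standing in for a pupil image,
    # with a deliberate sub-pixel lateral offset.
    pitch = 5e-6                      # detector pixel pitch [m], assumed
    yy, xx = np.indices((64, 64))
    x0, y0, sigma = 35.2, 28.7, 4.0   # offset and width in pixels
    img = np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2 * sigma ** 2))
    cx, cy, w = centroid_and_width(img, pitch)
    print(f"centroid = ({cx/pitch:.2f}, {cy/pitch:.2f}) px, "
          f"width = {w/pitch:.2f} px")

    # Depth-of-field scaling for, e.g., 10 pixels per lenslet at f/4, 550 nm.
    print(f"DOF ~ {lightfield_depth_of_field(550e-9, 4.0, 10) * 1e6:.1f} um")
```

On a noiseless symmetric pattern the centroid recovers the sub-pixel offset exactly, which is consistent with the abstract's claim of pixel-level lateral localization; in practice, noise and pattern asymmetry set the achievable accuracy.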