Convolutional Neural Network Training for RGBN Camera Color Restoration Using Generated Image Pairs

2020 
RGBN cameras that capture visible light and near-infrared (NIR) light simultaneously produce better color image quality in low-light conditions. However, the mixing of visible and NIR information in these cameras introduces additional color bias. The color correction matrix model widely used in current commercial color digital cameras cannot handle the complicated mapping between biased color and ground-truth color. Convolutional neural networks (CNNs) are good at fitting such complicated relationships, but they require a large number of training image pairs covering different scenes. Even when data augmentation techniques are applied, achieving satisfactory training results demands large amounts of manually captured data, which costs significant time and effort. Hence, a method is proposed for generating training pairs consistent with the parameters of a target RGBN camera, based on an open-access RGB-NIR dataset. The proposed method is verified by training an RGBN camera color restoration CNN model with the generated data. The results show that the CNN model trained with the generated data achieves satisfactory RGBN color restoration performance across different RGBN sensors.
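
To illustrate the contrast the abstract draws between a global color correction matrix and a learned mapping, the following is a minimal sketch, not the authors' model: the 3x3 matrix applies the same linear transform to every pixel, while a small fully convolutional network maps a 4-channel RGBN input to restored RGB. The architecture, layer widths, and names here are illustrative assumptions only.

```python
# Hypothetical sketch (not the paper's network): a global 3x3 color correction
# matrix vs. a small CNN mapping 4-channel RGBN input to restored 3-channel RGB.
import torch
import torch.nn as nn

def ccm_restore(rgb: torch.Tensor, ccm: torch.Tensor) -> torch.Tensor:
    """Linear color correction: the same 3x3 matrix applied at every pixel.

    rgb: (N, 3, H, W) biased color image; ccm: (3, 3) correction matrix.
    A single global linear transform cannot express the scene-dependent,
    nonlinear bias introduced by NIR mixing.
    """
    return torch.einsum('ij,njhw->nihw', ccm, rgb)

class RGBNColorRestorationCNN(nn.Module):
    """Small fully convolutional network: RGBN (4 channels) in, RGB (3 channels) out."""
    def __init__(self, width: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, width, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, 3, kernel_size=3, padding=1),
        )

    def forward(self, rgbn: torch.Tensor) -> torch.Tensor:
        return self.net(rgbn)

if __name__ == "__main__":
    model = RGBNColorRestorationCNN()
    rgbn = torch.rand(2, 4, 64, 64)     # stand-in for generated biased RGBN inputs
    target = torch.rand(2, 3, 64, 64)   # stand-in for corresponding ground-truth RGB
    loss = nn.functional.l1_loss(model(rgbn), target)
    loss.backward()                     # gradients for one training step
    print(loss.item())
```

In the workflow the abstract describes, the `rgbn`/`target` tensors would come from training pairs generated to match the target RGBN camera's parameters rather than from random data as in this toy example.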