Towards Learning Low-Light Indoor Semantic Segmentation with Illumination-Invariant Features

2021 
Abstract. Semantic segmentation models are often affected by illumination changes and fail to predict correct labels. Although indoor semantic segmentation has been studied extensively, little attention has been paid to low-light environments. In this paper, we propose a new framework, LISU, for Low-light Indoor Scene Understanding. We first decompose the low-light images into reflectance and illumination components, and then jointly learn reflectance restoration and semantic segmentation. To train and evaluate the proposed framework, we introduce a new data set, LLRGBD, which consists of a large synthetic low-light indoor data set (LLRGBD-synthetic) and a small real data set (LLRGBD-real). The experimental results show that the illumination-invariant features effectively improve semantic segmentation performance. Compared with the baseline model, the proposed LISU framework improves mIoU by 11.5%. In addition, pre-training on our synthetic data set increases mIoU by 7.2%. Our data sets and models are available on our project website.
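
The abstract does not give implementation details, so the following is only a minimal PyTorch sketch of the general idea it describes: a Retinex-style decomposition network that splits a low-light image into reflectance and illumination, followed by a network that jointly restores the reflectance and predicts semantic labels from it. All module names, layer sizes, loss handling, and the 14-class output are hypothetical placeholders, not the authors' LISU architecture.

# Minimal sketch of a LISU-style pipeline (assumptions: Retinex-style
# decomposition, joint restoration + segmentation on the reflectance).
# Module names, channel counts, and class count are hypothetical.
import torch
import torch.nn as nn

class DecompNet(nn.Module):
    """Splits a low-light RGB image into reflectance (3 ch) and illumination (1 ch)."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 4, 3, padding=1),  # 3 reflectance + 1 illumination channels
        )

    def forward(self, x):
        out = self.body(x)
        reflectance = torch.sigmoid(out[:, :3])
        illumination = torch.sigmoid(out[:, 3:])
        return reflectance, illumination

class JointRestoreSeg(nn.Module):
    """Jointly restores the reflectance and predicts semantic labels from it."""
    def __init__(self, num_classes=14):
        super().__init__()
        self.restore = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
        )
        self.segment = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, num_classes, 1),
        )

    def forward(self, reflectance, illumination):
        restored = self.restore(torch.cat([reflectance, illumination], dim=1))
        logits = self.segment(restored)
        return restored, logits

if __name__ == "__main__":
    x = torch.rand(2, 3, 120, 160)                 # batch of low-light RGB images
    decomp, joint = DecompNet(), JointRestoreSeg(num_classes=14)
    r, i = decomp(x)
    restored, logits = joint(r, i)
    print(restored.shape, logits.shape)            # (2, 3, 120, 160), (2, 14, 120, 160)

In a joint-training setup like the one the abstract describes, the restoration output and the segmentation logits would typically be supervised together (e.g. a weighted sum of a reconstruction loss and a cross-entropy loss), so that the segmentation branch benefits from the illumination-invariant reflectance features; the exact loss formulation used by LISU is not stated in the abstract.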