Generalizable No-Reference Image Quality Assessment via Deep Meta-learning

2021 
Recently, researchers have shown great interest in using convolutional neural networks (CNNs) for no-reference image quality assessment (NR-IQA). Owing to the lack of large-scale training data, however, existing efforts to optimize CNN-based NR-IQA models remain limited. Furthermore, the diversity of distortions in images results in a generalization problem: NR-IQA models trained on known distortions degrade when tested on unseen distortions, a task that is easy for humans. Hence, we propose an NR-IQA metric based on deep meta-learning, which is highly generalizable in the face of unseen distortions. The fundamental idea is to learn the meta-knowledge shared by humans when evaluating the quality of images with diverse distortions. Specifically, we define NR-IQA for different distortions as a series of tasks and propose a task selection strategy to build two task sets, characterized by synthetic-to-synthetic and synthetic-to-authentic distortions, respectively. Based on these two task sets, an optimization-based meta-learning approach is proposed to learn a generalized NR-IQA model, which can be directly used to evaluate the quality of images with unseen distortions. Extensive experiments demonstrate that our NR-IQA metric outperforms the state of the art in terms of both evaluation performance and generalization ability.
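The optimization-based meta-learning described above can be sketched with a minimal first-order MAML-style loop. This is only an illustrative toy, not the paper's actual method: a one-parameter linear model and 1-D regression tasks stand in for the CNN quality predictor and the per-distortion IQA tasks, and the split of each task into support and query batches mirrors the inner (task adaptation) and outer (meta-update) optimization the abstract refers to.

```python
import random

def loss_grad(w, batch):
    # Gradient of mean squared error for the toy model y_hat = w * x.
    return sum(2.0 * (w * x - y) * x for x, y in batch) / len(batch)

def fomaml(tasks, meta_steps=200, inner_lr=0.05, outer_lr=0.05):
    """First-order MAML sketch (hypothetical stand-in for the paper's
    optimization-based meta-learner over distortion-specific task sets)."""
    w = 0.0  # meta-parameters shared across all tasks
    for _ in range(meta_steps):
        support, query = random.choice(tasks)
        # Inner loop: adapt the shared parameters to one "distortion" task.
        w_task = w - inner_lr * loss_grad(w, support)
        # Outer loop: first-order meta-update using the adapted
        # parameters' loss on the held-out query batch.
        w = w - outer_lr * loss_grad(w_task, query)
    return w

def make_task(slope):
    # Each task is a tiny regression y = slope * x, split into
    # a support batch and a query batch.
    support = [(1.0, slope * 1.0), (2.0, slope * 2.0)]
    query = [(1.5, slope * 1.5), (2.5, slope * 2.5)]
    return (support, query)
```

With tasks whose slopes cluster around a common value, the meta-parameters converge toward an initialization that adapts quickly to any task in the family, which is the generalization property the abstract claims for unseen distortions.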