QS-Hyper: A Quality-Sensitive Hyper Network for the No-Reference Image Quality Assessment

2021 
Blind/no-reference image quality assessment (IQA) aims to predict a quality score for a single image without a reference. Deep learning models can capture a wide range of image artifacts and have driven significant progress in this field. However, current IQA methods generally rely on convolutional neural networks (CNNs) pre-trained on classification tasks to obtain image representations, and such representations do not faithfully capture image quality. To address this problem, this paper uses semi-supervised representation learning to train a quality-sensitive encoder (QS-encoder) that extracts image features tailored specifically to image quality. Intuitively, such features are better suited to training an IQA model than features learned for classification. The QS-encoder is then plugged into a carefully designed hyper network to build a quality-sensitive hyper network (QS-Hyper) that handles IQA tasks in more general and complex settings. Extensive experiments on public IQA datasets show that our method outperforms most state-of-the-art methods on both the Pearson linear correlation coefficient (PLCC) and Spearman's rank correlation coefficient (SRCC), achieving a 3% PLCC improvement and a 3.9% SRCC improvement on the TID2013 dataset. These results demonstrate that our method is better at capturing diverse image distortions and thus meets a broader range of evaluation requirements.
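The core idea described above, a quality-sensitive encoder whose features drive a hyper network that generates image-specific weights for the quality predictor, can be sketched as follows. This is a minimal PyTorch illustration under assumed design choices: the ResNet-18 backbone, the QSEncoder/HyperHead names, and all layer sizes are placeholders of ours, not the authors' implementation.

import torch
import torch.nn as nn
import torchvision.models as models

class QSEncoder(nn.Module):
    # Stand-in for the quality-sensitive encoder: maps an image to a
    # feature vector intended to reflect quality rather than semantics.
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=None)
        self.features = nn.Sequential(*list(backbone.children())[:-1])

    def forward(self, x):
        return self.features(x).flatten(1)  # (B, 512)

class HyperHead(nn.Module):
    # Hyper network: predicts per-image weights and bias of a small
    # quality regressor, so the scoring function adapts to each image.
    def __init__(self, feat_dim=512, hidden=64):
        super().__init__()
        self.weight_gen = nn.Linear(feat_dim, hidden)  # generates per-image weights
        self.bias_gen = nn.Linear(feat_dim, 1)         # generates per-image bias
        self.proj = nn.Linear(feat_dim, hidden)        # target features to be scored

    def forward(self, feat):
        w = self.weight_gen(feat)            # (B, hidden)
        b = self.bias_gen(feat)              # (B, 1)
        h = torch.relu(self.proj(feat))      # (B, hidden)
        return ((h * w).sum(dim=1, keepdim=True) + b).squeeze(1)  # (B,) scores

encoder, head = QSEncoder(), HyperHead()
scores = head(encoder(torch.randn(4, 3, 224, 224)))  # dummy image batch
print(scores.shape)  # torch.Size([4])

The PLCC and SRCC metrics reported above compare predicted scores against ground-truth mean opinion scores and can be computed directly with SciPy; the arrays below are dummy values for illustration only.

from scipy.stats import pearsonr, spearmanr

pred = [3.1, 4.5, 2.2, 3.8]  # hypothetical predicted quality scores
mos = [3.0, 4.7, 2.0, 4.0]   # hypothetical ground-truth mean opinion scores
plcc, _ = pearsonr(pred, mos)
srcc, _ = spearmanr(pred, mos)
print(f"PLCC={plcc:.3f}, SRCC={srcc:.3f}")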