Self-supervised Multi-class Pre-training for Unsupervised Anomaly Detection and Segmentation in Medical Images

2021 
Unsupervised anomaly detection (UAD), which requires only normal (healthy) training images, is an important tool for enabling the development of medical image analysis (MIA) applications such as disease screening, since it is often difficult to collect and annotate abnormal (or disease) images in MIA. However, relying exclusively on normal images can cause model training to overfit the normal class. Self-supervised pre-training is an effective solution to this problem. Unfortunately, current self-supervision methods adapted from computer vision are sub-optimal for MIA applications because they do not exploit MIA domain knowledge in the design of the pretext tasks or the training process. In this paper, we propose a new self-supervised pre-training method for UAD designed for MIA applications, named Multi-class Strong Augmentation via Contrastive Learning (MSACL). MSACL is based on a novel optimisation that contrasts normal images against multiple classes of synthesised abnormal images, with each class enforced to form a tight, dense cluster in terms of both Euclidean distance and cosine similarity; the abnormal images are formed by simulating a varying number of lesions of different sizes and appearances in the normal images. In our experiments, we show that MSACL pre-training improves the accuracy of state-of-the-art (SOTA) UAD methods on several MIA benchmarks built from colonoscopy, fundus screening, and COVID-19 chest X-ray datasets.
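The abstract describes two core ingredients: (i) synthesising multi-class abnormal images by simulating lesions in normal images, and (ii) a contrastive objective that pulls each class into a tight cluster. The sketch below illustrates both, assuming PyTorch and images of reasonable size (e.g. 256x256); the function names (simulate_lesions, msacl_style_loss) and the patch-blending lesion model are hypothetical illustrations, not the authors' released implementation, and the loss shown covers only the cosine-similarity (supervised-contrastive) part of the described objective.

```python
# Minimal sketch of the ideas described in the abstract (hypothetical names,
# not the paper's code). Assumes images as CHW float tensors in [0, 1].
import torch
import torch.nn.functional as F


def simulate_lesions(normal: torch.Tensor, num_classes: int = 4,
                     rng: torch.Generator | None = None):
    """Blend a class-dependent number of random patches ("lesions") of
    varying size and intensity into a normal image. Returns the augmented
    image and its synthetic class label (0 = normal, unchanged image)."""
    cls = int(torch.randint(0, num_classes, (1,), generator=rng))
    img = normal.clone()
    c, h, w = img.shape
    for _ in range(cls):  # class index doubles as lesion count in this toy model
        ph = int(torch.randint(h // 16, h // 4, (1,), generator=rng))
        pw = int(torch.randint(w // 16, w // 4, (1,), generator=rng))
        y = int(torch.randint(0, h - ph, (1,), generator=rng))
        x = int(torch.randint(0, w - pw, (1,), generator=rng))
        # crude "lesion": blend a random-intensity blob into the region
        blob = torch.rand(c, ph, pw, generator=rng)
        img[:, y:y + ph, x:x + pw] = 0.5 * img[:, y:y + ph, x:x + pw] + 0.5 * blob
    return img, cls


def msacl_style_loss(z: torch.Tensor, labels: torch.Tensor,
                     temperature: float = 0.1) -> torch.Tensor:
    """Supervised-contrastive-style loss over normal + synthetic-abnormal
    classes: embeddings sharing a label are pulled together in cosine
    similarity, all other pairs are pushed apart."""
    z = F.normalize(z, dim=1)                 # work in cosine-similarity space
    sim = z @ z.t() / temperature             # (B, B) pairwise similarities
    eye = torch.eye(len(z), dtype=torch.bool, device=z.device)
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~eye
    # log-softmax over each row, excluding self-similarity from the denominator
    log_prob = sim - torch.logsumexp(sim.masked_fill(eye, -1e9), dim=1, keepdim=True)
    pos_counts = pos.sum(1).clamp(min=1)
    return -(log_prob * pos).sum(1).div(pos_counts).mean()
```

In pre-training, each mini-batch would mix normal images with several simulate_lesions variants, and the encoder's embeddings together with the synthetic class labels would be fed to msacl_style_loss.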