Application of Conditional Adversarial Networks for Automatic Generation of MR-based Attenuation Map in PET/MR

2018 
Current PET/MR imaging systems perform attenuation correction for PET image reconstruction by segmenting the MR image and assigning empirical attenuation coefficients to each tissue class. Delineating bone in MR images is challenging, especially in the head and neck, because bone and air are difficult to separate. In this work, we study deep learning techniques to automatically generate attenuation maps directly from MR images, with a focus on the head and neck. We use a generative adversarial network (GAN) in a conditional setting for this image translation task. GANs split the deep learning network into a generator, which produces synthetic examples, and a discriminator, which learns to distinguish real from synthetic examples; the two networks are trained simultaneously. The objective function of the conditional GAN combines the generator loss, the discriminator loss, and the L1 distance between the label image and the generator's output image. The network is trained on image pairs consisting of an MR image from PET/MR (input) and the corresponding PET/CT-based 511 keV photon attenuation map (label). Over 6,000 training iterations, the generator loss decreases from 2.0 to 1.4, the discriminator loss rises from 0.6 to 1.0, and the L1 loss decreases from 0.2 to 0.1. In our previous work, which used a basic autoencoder network to convert MR images to attenuation maps, converting the trained network's L2 loss to an RMS pixel-value prediction error gave an error of about 0.2, with all pixel values scaled to the range 0 to 1. In this study, the L1 loss represents the same pixel prediction error and is about 0.1 after training, indicating a roughly 50% reduction in average pixel prediction error.
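
The abstract does not specify the network architectures or the weighting between the adversarial and L1 terms. A minimal PyTorch sketch of one training step under this kind of conditional-GAN-plus-L1 objective (as popularized by pix2pix) might look as follows; the toy generator G, discriminator D, tensor shapes, and the L1 weight lam = 100.0 are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

# Toy stand-ins for the generator (MR slice -> attenuation map) and the
# conditional discriminator; the paper's actual architectures are not given.
G = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid())
D = nn.Sequential(nn.Conv2d(2, 16, 3, padding=1), nn.LeakyReLU(0.2),
                  nn.Conv2d(16, 1, 3, padding=1))  # per-patch logits

adv_loss = nn.BCEWithLogitsLoss()  # adversarial (real-vs-fake) term
l1_loss = nn.L1Loss()              # pixel-wise term against the CT label
lam = 100.0                        # assumed L1 weight (pix2pix default)

mr = torch.rand(4, 1, 64, 64)      # batch of MR slices (input)
mu_map = torch.rand(4, 1, 64, 64)  # CT-derived 511 keV attenuation maps (label)

fake = G(mr)

# The discriminator is conditional: it sees (MR, attenuation map) pairs.
d_real = D(torch.cat([mr, mu_map], dim=1))
d_fake = D(torch.cat([mr, fake.detach()], dim=1))
loss_D = (adv_loss(d_real, torch.ones_like(d_real)) +
          adv_loss(d_fake, torch.zeros_like(d_fake)))

# Generator objective: fool the discriminator plus L1 to the label image.
d_fake_for_G = D(torch.cat([mr, fake], dim=1))
loss_G = (adv_loss(d_fake_for_G, torch.ones_like(d_fake_for_G)) +
          lam * l1_loss(fake, mu_map))
```

In an actual training loop, loss_D and loss_G would each be backpropagated through separate optimizers for D and G at every iteration, which is what produces the alternating loss trajectories reported above.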