In this work we present a deep learning approach to motion correction in brain MRI, framing it as an image synthesis problem. Motion is simulated in previously acquired brain images, and the resulting image pairs (corrupted + original) are used to train a conditional generative adversarial network (cGAN), referred to as MoCo-cGAN, to predict artefact-free images from motion-corrupted data. We also demonstrate transfer learning, fine-tuning the network to correct images of a different contrast. The trained MoCo-cGAN successfully performed motion correction on brain images with simulated motion: all predicted images improved quantitatively, and significant artefact suppression was observed.
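The motion-simulation step that generates the (corrupted, clean) training pairs can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual pipeline: it assumes rigid translational motion modelled as linear phase ramps applied to blocks of k-space phase-encode lines, and the function name and parameters (`simulate_motion`, `n_events`, `max_shift`) are hypothetical.

```python
import numpy as np

def simulate_motion(image, n_events=4, max_shift=3.0, seed=0):
    """Corrupt a 2D image with simulated inter-shot rigid translations.

    Illustrative sketch only: translations are modelled as linear phase
    ramps over contiguous blocks of k-space phase-encode lines; the
    MoCo-cGAN paper's exact simulation may differ.
    """
    rng = np.random.default_rng(seed)
    k = np.fft.fftshift(np.fft.fft2(image))
    ny, nx = image.shape
    # Split the phase-encode lines into blocks; each block is acquired
    # at a different (simulated) head position.
    edges = np.sort(rng.choice(np.arange(1, ny), n_events, replace=False))
    blocks = np.split(np.arange(ny), edges)
    fx = np.fft.fftshift(np.fft.fftfreq(nx))
    fy = np.fft.fftshift(np.fft.fftfreq(ny))
    for block in blocks[1:]:  # first block is the reference position
        dy, dx = rng.uniform(-max_shift, max_shift, size=2)
        # A spatial shift corresponds to a linear phase ramp in k-space.
        ramp = np.exp(-2j * np.pi * (fy[block, None] * dy + fx[None, :] * dx))
        k[block, :] *= ramp
    return np.abs(np.fft.ifft2(np.fft.ifftshift(k)))

# Build one (corrupted, clean) training pair from a synthetic image.
clean = np.zeros((64, 64))
clean[16:48, 16:48] = 1.0
corrupted = simulate_motion(clean)
```

Pairs produced this way would then supervise the cGAN, with the corrupted image as the conditioning input and the clean image as the target.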