Head motion is one of the major issues in neuroimaging. With the introduction of MR-PET scanners, motion parameters can now be estimated from two independent modalities acquired simultaneously. In this work, we propose a new data-driven method that combines MR image registration with a PET data-driven approach to model head motion over the complete course of an MR-PET examination. Without changing the MR-PET acquisition protocol, the proposed method provides motion estimates with a temporal resolution of approximately 2 seconds. Results on a phantom dataset show that the proposed method significantly reduces motion artefacts in brain PET images and improves image sharpness compared with MR-based methods.
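As a rough illustration of how sparse MR registration results might be combined with a dense PET data-driven motion surrogate at ~2 s resolution, the sketch below anchors the surrogate to the MR estimates and interpolates between them. This is a minimal assumption-laden sketch, not the authors' implementation; all function and variable names are illustrative, and the form of the PET-derived surrogate (e.g. a centre-of-mass trace of coincidence counts) is assumed.

```python
# Minimal sketch (assumed, not the authors' method): fuse sparse MR-derived
# rigid-body motion estimates with a dense PET data-driven surrogate
# sampled roughly every 2 s. All names here are illustrative.
import numpy as np
from scipy.interpolate import interp1d

def fuse_motion_estimates(mr_times, mr_params, pet_times, pet_signal):
    """
    mr_times   : (M,) times of MR-based registrations [s]
    mr_params  : (M, 6) rigid-body parameters (3 translations, 3 rotations)
    pet_times  : (T,) PET sample times at ~2 s resolution [s]
    pet_signal : (T, 6) PET data-driven motion surrogate, assumed to track
                 the true motion up to a slowly varying offset
    Returns an (T, 6) array of motion parameters on the dense PET time grid.
    """
    fused = np.empty((len(pet_times), mr_params.shape[1]))
    for k in range(mr_params.shape[1]):
        # Surrogate value at each MR registration time.
        surrogate_at_mr = interp1d(pet_times, pet_signal[:, k],
                                   fill_value="extrapolate")(mr_times)
        # Offset between the MR estimate and the surrogate, interpolated
        # over the whole examination so the dense trace stays anchored
        # to the (sparser but more accurate) MR registrations.
        offset = interp1d(mr_times, mr_params[:, k] - surrogate_at_mr,
                          fill_value="extrapolate")(pet_times)
        fused[:, k] = pet_signal[:, k] + offset
    return fused
```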