We recently showed that MR-guided PET reconstruction can be mimicked in image space using a convolutional neural network (CNN), which facilitates the translation of MR-guided PET reconstructions into clinical routine. In this work, we test the robustness of our CNN with respect to the input PET tracer. We show that training the CNN with PET images from two different tracers ([18F]FDG and [18F]PE2I) yields a CNN that also performs very well on a third tracer ([18F]FET), which was not the case when the network was trained on images from a single tracer only.