A deep neural network is presented that synthetically generates T2FLAIR-weighted images from other standard neuroimaging acquisitions. Network performance improved when the input images shared physical sources of contrast with the T2FLAIR contrast, and degraded when contrasts with disparate physical origins, such as fractional anisotropy, were included. This suggests that some feature engineering is appropriate when building deep neural networks to perform style transforms on MRI contrast: the input features should share physical sources of contrast with the desired output contrast. In the optimally trained network, pathology present in the acquired T2FLAIR images but absent from the training dataset was correctly reconstructed.
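The abstract does not specify the network architecture or the exact input contrasts, but the setup it describes is multi-contrast image-to-image synthesis: co-registered source contrasts stacked as input channels, with the acquired T2FLAIR image as the regression target. Below is a minimal sketch of that setup, assuming a plain convolutional encoder-decoder and three hypothetical input contrasts (e.g., T1w, T2w, PDw); all layer sizes, names, and the L1 loss are illustrative assumptions, not the authors' method.

```python
# Illustrative sketch only: architecture, channel counts, and loss are
# assumptions; the abstract does not specify the authors' network.
import torch
import torch.nn as nn

class ContrastSynthesisNet(nn.Module):
    """Maps a stack of input MRI contrasts (channels) to one synthetic contrast."""

    def __init__(self, in_contrasts: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_contrasts, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            # Single output channel: the synthetic T2FLAIR image.
            nn.Conv2d(32, 1, kernel_size=3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# Hypothetical usage: three co-registered input contrasts stacked along the
# channel axis; the target is the acquired T2FLAIR slice (random stand-in here).
model = ContrastSynthesisNet(in_contrasts=3)
inputs = torch.randn(8, 3, 128, 128)   # batch of 8 multi-contrast slices
target = torch.randn(8, 1, 128, 128)   # acquired T2FLAIR target
loss = nn.functional.l1_loss(model(inputs), target)
loss.backward()
```

In this framing, the abstract's feature-engineering conclusion corresponds to the choice of which contrasts to stack as input channels: adding a channel whose contrast mechanism is unrelated to T2FLAIR (such as fractional anisotropy) would, per the reported results, degrade rather than help the synthesis.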