We developed a context-aware 2.5D Generative Adversarial Network (GAN) to generate synthetic CT images from MRI. Adjacent 2D slices with an in-plane matrix of 512 × 512 and a user-defined slice context (from 3 to 41 slices) were provided as input. This allows the network to learn out-of-plane information for the slice of interest, thereby alleviating the intensity-discontinuity problem seen in 2D networks. In addition, this approach uses less GPU memory than a 3D GAN. Our results indicate that networks trained with a larger number of adjacent slices outperform those trained with fewer slices. A minimal sketch of the 2.5D input formulation follows.
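The sketch below illustrates the general 2.5D idea only, not the authors' implementation: adjacent MRI slices are stacked along the channel dimension and 2D convolutions map them to the synthetic CT slice of interest, so out-of-plane context is captured without the memory cost of 3D convolutions. All names (`Generator2p5D`, `n_context`) and the layer choices are hypothetical.

```python
import torch
import torch.nn as nn

class Generator2p5D(nn.Module):
    """Toy 2.5D generator: a stack of adjacent MRI slices in, one CT slice out."""
    def __init__(self, n_context: int = 3):
        super().__init__()
        # 2D convolutions over n_context adjacent slices; out-of-plane
        # information enters through the channel dimension rather than a 3D kernel.
        self.net = nn.Sequential(
            nn.Conv2d(n_context, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 1, kernel_size=3, padding=1),  # synthetic CT for the centre slice
        )

    def forward(self, mri_stack: torch.Tensor) -> torch.Tensor:
        # mri_stack: (batch, n_context, 512, 512) adjacent MRI slices
        return self.net(mri_stack)

# Usage: 3 adjacent 512 x 512 MRI slices -> one synthetic CT slice, shape (1, 1, 512, 512)
gen = Generator2p5D(n_context=3)
sct_slice = gen(torch.randn(1, 3, 512, 512))
```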