We applied a deep-learning video-processing network (originally developed to synthesize high-resolution photorealistic video from a time series of dancing poses, semantically segmented street-view labels, or human face outline sketches) to Dixon imaging to combine the benefits of 2D and 3D networks. The developed Dixon Video Domain Transfer Generative Adversarial Network (DixonVDTGAN) could create slice-to-slice consistent water images with a reduced demand on GPU memory. It could also successfully correct deep-learning processing errors, yielding robust water and fat signal separation, under two assumptions: that the processing errors are localized and that the image phase is spatially smooth.
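The memory advantage of the video-domain formulation is easiest to see in code. Below is a minimal sketch, not the authors' implementation, of the general idea: run a 2D generator over the slice axis as if the slices were video frames, conditioning each output on the previously generated slice so the water volume stays slice-to-slice consistent while only a few 2D tensors occupy GPU memory at a time. The generator here is a placeholder convolution stack, and all names (`SliceRecurrentGenerator`, `synthesize_water_volume`) are hypothetical.

```python
import torch
import torch.nn as nn

class SliceRecurrentGenerator(nn.Module):
    """Placeholder 2D generator. Input channels: the current in-phase and
    opposed-phase slices plus the previously generated water slice."""
    def __init__(self, in_ch=3, out_ch=1, feat=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, out_ch, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

@torch.no_grad()
def synthesize_water_volume(gen, in_phase, opposed_phase):
    """Treat the slice axis as the video "time" axis: generate the water
    volume one 2D slice at a time, conditioned on the previous output.
    in_phase, opposed_phase: (num_slices, H, W) tensors."""
    num_slices, H, W = in_phase.shape
    prev_water = torch.zeros(1, 1, H, W)            # "first frame" prior
    water_slices = []
    for z in range(num_slices):
        cond = torch.stack([in_phase[z], opposed_phase[z]]).unsqueeze(0)
        x = torch.cat([cond, prev_water], dim=1)    # condition on previous slice
        prev_water = gen(x)
        water_slices.append(prev_water[0, 0])
    return torch.stack(water_slices)                # (num_slices, H, W)

gen = SliceRecurrentGenerator()
vol = synthesize_water_volume(gen, torch.rand(40, 64, 64), torch.rand(40, 64, 64))
print(vol.shape)  # torch.Size([40, 64, 64])
```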
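The error-correction step can likewise be sketched under the two stated assumptions. In two-point Dixon imaging the opposed-phase signal is approximately (W − F)e^{iφ} with a spatially smooth phase φ, so the phase implied by the network's water/fat assignment should match a smoothed field map everywhere except at localized swap errors, where it is off by π. The following sketch (hypothetical function `correct_localized_swaps`; not the published correction method) flips voxels whose implied phase deviates from the smoothed phase by more than π/2.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def correct_localized_swaps(water_dl, ip, op, sigma=8.0, eps=1e-8):
    """Hypothetical post-correction sketch.
    water_dl : real DL water estimate, shape (H, W)
    ip, op   : complex in-phase / opposed-phase images, shape (H, W)
    Assumes |ip| ~ W + F and op ~ (W - F) * exp(i * phi), phi smooth."""
    fat_dl = np.abs(ip) - water_dl                  # implied fat signal
    denom = water_dl - fat_dl                       # should carry op's magnitude
    implied_phase = np.angle(op / (denom + eps))    # phi, or phi + pi if swapped
    # Enforce the smooth-phase assumption by smoothing in the complex domain
    # (avoids wrap-around artifacts of smoothing raw angles).
    z = np.exp(1j * implied_phase)
    smooth_phase = np.angle(gaussian_filter(z.real, sigma)
                            + 1j * gaussian_filter(z.imag, sigma))
    # Localized errors: implied phase disagrees with the smooth field by ~pi.
    deviation = np.angle(np.exp(1j * (implied_phase - smooth_phase)))
    swapped = np.abs(deviation) > np.pi / 2
    return np.where(swapped, fat_dl, water_dl)      # flip water/fat where swapped
```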