This work demonstrates the use of recurrent generative spatiotemporal autoencoders to predict up to fifteen future frames of abdominal DCE-MRI video data from only three ground-truth input frames of context. The objective is to predict healthy-patient video data and organ-specific contrast curves, expediting anomaly detection and enabling pulse-sequence optimization. The model shows promise: it learned contrast changes without losing structural resolution during training, laying the foundation for future work.
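The abstract does not give architectural details, so the following is only a minimal toy sketch of the general interface it describes: a recurrent autoencoder that is warmed up on a few ground-truth context frames and then rolled out autoregressively to generate future frames. The encoder, recurrent cell, and decoder here are untrained random linear maps (not the authors' model), and all names, sizes, and the `predict_frames` function are hypothetical illustrations of the 3-in / 15-out rollout pattern only.

```python
import numpy as np

def predict_frames(context, n_future=15, latent=64, seed=0):
    """Toy recurrent autoencoder rollout (untrained, random weights).

    context: array of shape (T, H, W) holding ground-truth frames.
    Returns an array of shape (n_future, H, W) of generated frames.
    """
    rng = np.random.default_rng(seed)
    T, H, W = context.shape
    D = H * W
    W_enc = rng.normal(0.0, 0.01, (latent, D))       # encoder: frame -> latent
    W_rec = rng.normal(0.0, 0.01, (latent, latent))  # recurrent transition
    W_dec = rng.normal(0.0, 0.01, (D, latent))       # decoder: latent -> frame

    # Warm-up: absorb the ground-truth context frames into the hidden state.
    h = np.zeros(latent)
    for t in range(T):
        h = np.tanh(W_enc @ context[t].ravel() + W_rec @ h)

    # Autoregressive rollout: decode a frame, then feed it back as input.
    preds = []
    for _ in range(n_future):
        frame = (W_dec @ h).reshape(H, W)
        preds.append(frame)
        h = np.tanh(W_enc @ frame.ravel() + W_rec @ h)
    return np.stack(preds)

# Three 8x8 context frames in, fifteen predicted frames out.
ctx = np.zeros((3, 8, 8))
out = predict_frames(ctx)
```

In the actual work, the linear maps would be learned convolutional/recurrent layers trained on DCE-MRI sequences; the sketch only illustrates the warm-up-then-rollout prediction loop.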