Deep features derived from data-driven learning have consistently been shown to outperform conventional texture features for lesion characterization. However, because of the slice thickness of medical imaging, through-plane resolution is worse than in-plane resolution. As a result, deep features extracted from through-plane slices may perform worse, and their contribution to the final characterization may be very limited. We propose an end-to-end super-resolution and self-attention framework based on a generative adversarial network (GAN), in which low-resolution through-plane slices are enhanced by learning from high-resolution in-plane slices to improve the performance of lesion characterization.
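The abstract does not give implementation details, so the following is only a minimal sketch of the two named ingredients: a SAGAN-style self-attention block and a sub-pixel super-resolution generator that maps a low-resolution through-plane slice to a higher-resolution one. All names, shapes, and channel counts here (SelfAttention2d, SRGenerator, scale, etc.) are hypothetical assumptions, not the authors' architecture.

```python
# Hedged sketch only: the paper's actual generator, discriminator, and losses
# are not described in the abstract. This illustrates the general idea of a
# self-attention block inside a super-resolution generator.
import torch
import torch.nn as nn


class SelfAttention2d(nn.Module):
    """SAGAN-style self-attention over the spatial positions of a feature map."""

    def __init__(self, channels: int):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, 1)
        self.key = nn.Conv2d(channels, channels // 8, 1)
        self.value = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)  # (b, h*w, c/8)
        k = self.key(x).flatten(2)                    # (b, c/8, h*w)
        attn = torch.softmax(q @ k, dim=-1)           # (b, h*w, h*w)
        v = self.value(x).flatten(2)                  # (b, c, h*w)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x                   # residual connection


class SRGenerator(nn.Module):
    """Toy generator: upsamples a single-channel low-resolution slice."""

    def __init__(self, scale: int = 2):
        super().__init__()
        self.head = nn.Sequential(nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(inplace=True))
        self.attn = SelfAttention2d(64)
        self.up = nn.Sequential(
            nn.Conv2d(64, 64 * scale**2, 3, padding=1),
            nn.PixelShuffle(scale),                   # sub-pixel upsampling
            nn.Conv2d(64, 1, 3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.up(self.attn(self.head(x)))


# Hypothetical usage: 2x super-resolution of one through-plane slice.
lr = torch.randn(1, 1, 64, 64)     # low-res through-plane slice
sr = SRGenerator(scale=2)(lr)      # -> shape (1, 1, 128, 128)
```

In an adversarial setup such as the one the abstract describes, a generator like this would be trained against a discriminator that distinguishes its outputs from genuine high-resolution in-plane slices; that training loop is omitted here.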