Deep learning (DL)-based approaches have shown promise for automating medical image segmentation with high efficacy. However, current state-of-the-art supervised DL methods require large numbers of labeled training images, which are difficult to curate at scale. In this work, we propose a self-supervised training scheme that reduces dependence on labeled data by pretraining networks in an unsupervised manner. We show that our method improves segmentation performance, especially in very limited data scenarios (only 10-25% of scans available), and can match or surpass the accuracy of state-of-the-art supervised networks with approximately 50% fewer labeled scans.
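To make the two-stage pattern concrete, below is a minimal PyTorch sketch of self-supervised pretraining followed by supervised fine-tuning. The abstract does not specify the pretext task or architecture, so this uses an illustrative denoising-reconstruction pretext task and toy `Encoder`/`Decoder` modules; all names, shapes, and hyperparameters here are hypothetical, not the authors' method.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Toy convolutional encoder shared by both training stages."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Toy upsampling head; reused for reconstruction and segmentation."""
    def __init__(self, out_channels=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, out_channels, 2, stride=2),
        )
    def forward(self, x):
        return self.net(x)

# Stage 1: self-supervised pretraining on unlabeled scans.
# The pretext task here (denoising reconstruction) is an illustrative
# assumption; the paper does not state which pretext task it uses.
encoder, recon_head = Encoder(), Decoder(out_channels=1)
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(recon_head.parameters()), lr=1e-3)
mse = nn.MSELoss()
for _ in range(10):                               # toy training loop
    scans = torch.rand(4, 1, 64, 64)              # stand-in unlabeled images
    noisy = scans + 0.1 * torch.randn_like(scans)
    opt.zero_grad()
    loss = mse(recon_head(encoder(noisy)), scans)
    loss.backward()
    opt.step()

# Stage 2: supervised fine-tuning for segmentation on the small labeled
# subset, reusing (and updating) the pretrained encoder weights.
seg_head = Decoder(out_channels=2)                # 2 classes: background/foreground
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(seg_head.parameters()), lr=1e-4)
ce = nn.CrossEntropyLoss()
for _ in range(10):
    scans = torch.rand(4, 1, 64, 64)              # stand-in labeled images
    masks = torch.randint(0, 2, (4, 64, 64))      # stand-in segmentation masks
    opt.zero_grad()
    loss = ce(seg_head(encoder(scans)), masks)
    loss.backward()
    opt.step()
```

The design point the sketch illustrates is that only the encoder weights carry over between stages: the reconstruction head is discarded after pretraining, and a fresh segmentation head is trained on the limited labeled data.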