Recent advances in deep learning and convolutional neural networks (CNNs) have shown promise for automatic segmentation of MR images. However, because of the stochastic nature of the training process, it is difficult to interpret what information a network learns to represent. In this study, we explore how differences in the learned weights of different networks can be used to express semantic relationships between tissues. For cartilage and meniscus segmentation in the knee, we show that a network's generalizability across tissue segmentation tasks can be quantified by distances between networks in weight space. We also use these findings to motivate robust training policies for fine-tuning with limited data.
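To make the idea of a distance between networks concrete, the sketch below compares two trained models by flattening all of their learned weights into vectors and computing a distance between those vectors. The abstract does not specify the metric or the architecture; cosine distance and the small TinySegNet module here are assumptions chosen purely for illustration, not the authors' actual method.

```python
# Minimal sketch: a weight-space distance between two networks.
# Assumptions (not from the abstract): cosine distance as the metric,
# and TinySegNet as a hypothetical stand-in segmentation architecture.
import torch
import torch.nn as nn


class TinySegNet(nn.Module):
    """Hypothetical small CNN standing in for a segmentation network."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(8, 1, kernel_size=1),  # per-pixel tissue logits
        )

    def forward(self, x):
        return self.features(x)


def flatten_weights(model: nn.Module) -> torch.Tensor:
    """Concatenate all parameters of a model into one 1-D vector."""
    return torch.cat([p.detach().flatten() for p in model.parameters()])


def network_distance(model_a: nn.Module, model_b: nn.Module) -> float:
    """Cosine distance between the flattened weight vectors of two models.

    Assumes both models share the same architecture, so their parameter
    vectors are directly comparable element by element.
    """
    wa, wb = flatten_weights(model_a), flatten_weights(model_b)
    cosine = nn.functional.cosine_similarity(wa, wb, dim=0)
    return 1.0 - cosine.item()


if __name__ == "__main__":
    torch.manual_seed(0)
    net_cartilage = TinySegNet()  # e.g., trained to segment cartilage
    net_meniscus = TinySegNet()   # e.g., trained to segment meniscus
    d = network_distance(net_cartilage, net_meniscus)
    print(f"weight-space distance: {d:.4f}")
```

Under this reading, networks trained on semantically related tissues would sit closer together in weight space, and such distances could be used to pick a good starting network when fine-tuning with limited data.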