To perform large-scale analyses of disease progression, it is necessary to automate the retrieval and alignment of MR images of similar contrast. The goal of this study is to create an algorithm that can reliably classify brain exams by MR image contrast. We use two modeling strategies (SVM and CNN) and two training/testing cohorts to compare the within-disease and between-disease transferability of the algorithms. For both cohorts, deep ResNets that extract imaging features, combined with DICOM metadata in a random forest, perform best, achieving 95.6% accuracy on the within-disease comparison and 99.6% overall accuracy on the between-disease comparison.
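The abstract describes combining deep ResNet image features with DICOM metadata in a random forest. Below is a minimal sketch of such a pipeline, assuming a torchvision ResNet-18 feature extractor, scikit-learn's RandomForestClassifier, and a few example DICOM tags (EchoTime, RepetitionTime, FlipAngle); these specific choices are illustrative assumptions, not the authors' reported configuration.

```python
# Hypothetical sketch: ResNet image features + DICOM metadata -> random forest.
# Model and tag choices are assumptions, not the study's exact setup.
import numpy as np
import pydicom
import torch
from torchvision import models, transforms
from sklearn.ensemble import RandomForestClassifier

# Pretrained ResNet with the classification head removed -> 512-d feature vector.
resnet = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
resnet.fc = torch.nn.Identity()
resnet.eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),                            # HxW float array -> 1xHxW tensor
    transforms.Resize((224, 224), antialias=True),    # match ImageNet input size
    transforms.Lambda(lambda x: x.repeat(3, 1, 1)),   # grayscale -> 3 channels
])

def image_features(pixel_array: np.ndarray) -> np.ndarray:
    """Deep imaging features for one slice, intensity-normalized to [0, 1]."""
    img = (pixel_array - pixel_array.min()) / (np.ptp(pixel_array) + 1e-8)
    x = preprocess(img.astype(np.float32)).unsqueeze(0)
    with torch.no_grad():
        return resnet(x).squeeze(0).numpy()

def metadata_features(ds: pydicom.Dataset) -> np.ndarray:
    """Numeric DICOM tags plausibly related to contrast (example tags only)."""
    return np.array([
        float(getattr(ds, "EchoTime", 0.0)),
        float(getattr(ds, "RepetitionTime", 0.0)),
        float(getattr(ds, "FlipAngle", 0.0)),
    ])

def combined_features(path: str) -> np.ndarray:
    """Concatenate deep imaging features with metadata features for one file."""
    ds = pydicom.dcmread(path)
    return np.concatenate([
        image_features(ds.pixel_array.astype(np.float32)),
        metadata_features(ds),
    ])

# X: stacked feature vectors, y: contrast labels (e.g., "T1", "T2", "FLAIR")
# clf = RandomForestClassifier(n_estimators=500).fit(X, y)
```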