Studies have shown that neuroimaging data (e.g., MRI, PET) and genetic data (e.g., single-nucleotide polymorphisms, SNPs) are associated with Alzheimer's disease (AD). However, building an accurate AD diagnosis model from these data is challenging because they are heterogeneous and high-dimensional. We therefore first reduced the dimensionality of the neuroimaging and SNP data using region-of-interest (ROI) based features and deep feature learning, respectively. We then proposed a deep cross-modal feature learning and fusion framework to fuse the high-level features of these modalities. Experimental results show that our method using MRI+PET+SNP data outperforms competing methods.
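As a rough illustration of the kind of pipeline the abstract describes, the sketch below (in PyTorch) encodes ROI-based MRI and PET features and SNP-derived features into a shared latent space and fuses the high-level features for classification. All feature dimensions, layer sizes, and the concatenation-based fusion strategy are placeholder assumptions for illustration, not the architecture reported in the paper.

import torch
import torch.nn as nn

class CrossModalFusionNet(nn.Module):
    """Illustrative cross-modal feature learning and fusion sketch.

    Input dimensions and layer widths are placeholders, not the values
    used in the paper.
    """

    def __init__(self, mri_dim=90, pet_dim=90, snp_dim=2000,
                 latent_dim=64, n_classes=2):
        super().__init__()
        # Modality-specific encoders map each input to a shared latent space.
        self.mri_enc = nn.Sequential(
            nn.Linear(mri_dim, 128), nn.ReLU(), nn.Linear(128, latent_dim))
        self.pet_enc = nn.Sequential(
            nn.Linear(pet_dim, 128), nn.ReLU(), nn.Linear(128, latent_dim))
        self.snp_enc = nn.Sequential(
            nn.Linear(snp_dim, 256), nn.ReLU(), nn.Linear(256, latent_dim))
        # Fusion head: concatenate high-level features and classify.
        self.classifier = nn.Sequential(
            nn.Linear(3 * latent_dim, 64), nn.ReLU(), nn.Linear(64, n_classes))

    def forward(self, mri, pet, snp):
        # Concatenate the three modality embeddings along the feature axis.
        fused = torch.cat(
            [self.mri_enc(mri), self.pet_enc(pet), self.snp_enc(snp)], dim=1)
        return self.classifier(fused)


# Example forward pass with random tensors standing in for ROI and SNP features.
model = CrossModalFusionNet()
logits = model(torch.randn(4, 90), torch.randn(4, 90), torch.randn(4, 2000))
print(logits.shape)  # torch.Size([4, 2])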