Low-dimensional subspace models have recently been developed for fast, high-SNR MRSI by effectively reducing the degrees of freedom of the imaging problem. However, low-dimensional linear subspace models may be inadequate for capturing more complicated spectral variations across a general population. This work presents a new approach to modeling general spectroscopic signals by learning a nonlinear low-dimensional representation. Specifically, we integrated a well-defined spectral fitting model with a deep autoencoder network to learn the low-dimensional manifold on which the high-dimensional spectroscopic signals reside, and applied this learned model to denoising and reconstructing MRSI data. Promising results were obtained, demonstrating the potential of the proposed method.
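The abstract does not specify the network architecture, so the following is only a minimal sketch of the underlying idea: spectra generated by a few physical parameters lie near a low-dimensional manifold, and an autoencoder trained to map noisy spectra to clean ones can exploit that structure for denoising. All specifics here are assumptions for illustration, not the authors' method: synthetic two-peak Lorentzian line shapes, a single tanh hidden layer of 8 units, and plain gradient-descent training in NumPy.

```python
import numpy as np

rng = np.random.default_rng(0)

D = 64                              # number of spectral points (assumed)
x = np.linspace(-1.0, 1.0, D)

def lorentzian(center, width, amp):
    return amp * width**2 / ((x - center)**2 + width**2)

def make_spectrum(a1, a2, w):
    # Hypothetical 3-parameter family: two peaks at fixed positions,
    # varying amplitudes and a shared linewidth -> a 3-D nonlinear manifold.
    return lorentzian(-0.4, w, a1) + lorentzian(0.3, w, a2)

N = 2000
amps1 = rng.uniform(0.5, 1.5, N)
amps2 = rng.uniform(0.5, 1.5, N)
widths = rng.uniform(0.05, 0.15, N)
clean = np.array([make_spectrum(a1, a2, w)
                  for a1, a2, w in zip(amps1, amps2, widths)])
noisy = clean + 0.1 * rng.standard_normal(clean.shape)

# One-hidden-layer denoising autoencoder: D -> 8 (tanh) -> D (linear).
H = 8
W1 = 0.1 * rng.standard_normal((D, H)); b1 = np.zeros(H)
W2 = 0.1 * rng.standard_normal((H, D)); b2 = np.zeros(D)
lr = 0.05
for epoch in range(3000):
    z = np.tanh(noisy @ W1 + b1)    # encoder: low-dimensional code
    out = z @ W2 + b2               # decoder: reconstructed spectrum
    err = out - clean               # denoising target is the clean spectrum
    # Backpropagation of the mean-squared-error loss.
    gW2 = z.T @ err / N; gb2 = err.mean(axis=0)
    dz = (err @ W2.T) * (1.0 - z**2)
    gW1 = noisy.T @ dz / N; gb1 = dz.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

denoised = np.tanh(noisy @ W1 + b1) @ W2 + b2
mse_noisy = np.mean((noisy - clean)**2)       # ~noise variance (0.01)
mse_denoised = np.mean((denoised - clean)**2)
print(f"MSE noisy: {mse_noisy:.4f}, MSE denoised: {mse_denoised:.4f}")
```

Because the signals occupy only a few effective degrees of freedom, the reconstruction error of the trained autoencoder falls below the raw noise level, which is the same mechanism the abstract exploits (there combined with a spectral fitting model rather than a purely synthetic signal family).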