Regularization in MRI reconstruction often relies on sparse representation of signals as linear combinations of dictionary atoms. In 'blind' settings, these dictionaries are learned during reconstruction from the corrupted/aliased images themselves, using no training data. In contrast, 'fully supervised' dictionary learning (DL) requires uncorrupted/fully sampled training images, and the learned dictionary is then used to regularize image reconstruction from undersampled data. We combine these two DL frameworks, learning two separate dictionaries in a residual fashion to jointly reconstruct an image from undersampled data. Our algorithm, Super-BReD Learning, shows promising results on reconstruction from retrospectively undersampled data and outperforms recent DL schemes.
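To illustrate the residual two-dictionary idea, the following is a minimal sketch under simplifying assumptions: each image patch is approximated by a sparse code under a pre-learned ("supervised") dictionary, and the remaining residual is sparse-coded under a dictionary adapted to the current image ("blind"). The function names, sparsity levels, and the plain hard-thresholding sparse coder are illustrative assumptions, not the actual Super-BReD Learning algorithm, which the abstract does not specify in detail.

```python
# Sketch only: residual sparse approximation with two dictionaries.
# Assumed patch-wise model: patch ~= D_sup @ a + D_blind @ b, with sparse a, b.
import numpy as np

def sparse_code(patches, dictionary, sparsity):
    """Crude sparse coding: keep the `sparsity` largest correlations per patch."""
    coeffs = dictionary.T @ patches                      # (n_atoms, n_patches)
    # zero out all but the top-`sparsity` coefficients in each column
    idx = np.argsort(np.abs(coeffs), axis=0)[:-sparsity, :]
    np.put_along_axis(coeffs, idx, 0.0, axis=0)
    return coeffs

def residual_two_dict_approx(patches, D_sup, D_blind, s_sup=4, s_blind=2):
    """Approximate patches as D_sup @ a + D_blind @ b (residual fashion)."""
    a = sparse_code(patches, D_sup, s_sup)               # supervised dictionary pass
    residual = patches - D_sup @ a
    b = sparse_code(residual, D_blind, s_blind)          # blind dictionary on residual
    return D_sup @ a + D_blind @ b

# Toy usage with random data standing in for patches and learned dictionaries.
rng = np.random.default_rng(0)
patch_dim, n_atoms, n_patches = 36, 64, 100
D_sup = rng.standard_normal((patch_dim, n_atoms))
D_sup /= np.linalg.norm(D_sup, axis=0)                   # unit-norm atoms
D_blind = rng.standard_normal((patch_dim, n_atoms))
D_blind /= np.linalg.norm(D_blind, axis=0)
patches = rng.standard_normal((patch_dim, n_patches))
approx = residual_two_dict_approx(patches, D_sup, D_blind)
print("relative error:", np.linalg.norm(patches - approx) / np.linalg.norm(patches))
```

In a full reconstruction, such a patch-wise regularizer would alternate with a k-space data-consistency step, and the blind dictionary would be updated from the current image estimate at each iteration; those steps are omitted here.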