Recently, deep learning (DL) has emerged as a means for improving accelerated MRI reconstruction. However, most current DL-MRI approaches depend on the availability of ground truth data, which is generally infeasible or impractical to acquire due to constraints such as organ motion. In this work, we tackle this issue by proposing a physics-based self-supervised DL approach, in which the acquired measurements are split into two sets. The first set is used to enforce data consistency while training the network, and the second is used to define the loss. The proposed technique enables training of high-quality DL-MRI reconstruction without fully-sampled data.
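The following is a minimal sketch of the measurement-splitting idea described above, not the authors' implementation: the acquired k-space locations are divided into a data-consistency set and a held-out loss set, and the training loss is evaluated only on the held-out set. The array sizes, the split fraction `rho`, and the placeholder `reconstruct` function (a zero-filled inverse FFT standing in for the unrolled physics-based network) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical undersampled 2D k-space of a single slice (synthetic data).
ny, nx = 128, 128
kspace = rng.standard_normal((ny, nx)) + 1j * rng.standard_normal((ny, nx))
mask = rng.random((ny, nx)) < 0.25           # acquired k-space locations

# Split acquired locations into two disjoint sets:
#   data-consistency set -> used inside the network during training
#   loss set             -> held out, used only to define the training loss
rho = 0.4                                    # fraction assigned to the loss set (assumed)
acquired_idx = np.flatnonzero(mask)
loss_idx = rng.choice(acquired_idx, size=int(rho * acquired_idx.size), replace=False)

mask_loss = np.zeros_like(mask)
mask_loss.flat[loss_idx] = True
mask_dc = mask & ~mask_loss                  # disjoint from the loss set

def reconstruct(kspace_dc):
    """Placeholder for the learned reconstruction; here simply a
    zero-filled inverse FFT of the data-consistency measurements."""
    return np.fft.ifft2(kspace_dc)

image = reconstruct(kspace * mask_dc)

# Self-supervised loss: re-simulate k-space from the reconstruction and
# compare it to the acquired measurements in the held-out loss set only.
kspace_pred = np.fft.fft2(image)
residual = (kspace_pred - kspace) * mask_loss
loss = np.linalg.norm(residual) / np.linalg.norm(kspace * mask_loss)
print(f"self-supervised loss on held-out set: {loss:.4f}")
```

In a full training setup, this loss would be backpropagated through the network in place of a supervised loss against fully-sampled reference data.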