The \(\ell^2\) norm is the default loss function for complex image reconstruction. In this work, we investigate the behavior of the \(\ell^1\) and \(\ell^2\) loss functions for complex image reconstruction with models that are not complex-valued. Simulations show that these norms assign a lower loss to reconstructions whose magnitude is below that of the ground truth, introducing an asymmetry into the loss. To address this, we propose a new, symmetric loss function and train deep learning models to show that it achieves better performance and faster convergence on complex image reconstruction tasks.
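As a sketch of where the asymmetry comes from (the symbols \(m\), \(\hat{m}\), \(\Delta\) are introduced here for illustration and are not taken from the abstract): for a ground-truth value \(m e^{i\theta}\) and a reconstruction \(\hat{m} e^{i\hat{\theta}}\) with phase error \(\Delta = \theta - \hat{\theta}\), the pointwise \(\ell^2\) loss is
\[
\bigl| m e^{i\theta} - \hat{m} e^{i\hat{\theta}} \bigr|^2 = m^2 + \hat{m}^2 - 2 m \hat{m} \cos\Delta,
\]
which is minimized at \(\hat{m} = m \cos\Delta \le m\). Whenever the phase error is nonzero, the loss-minimizing magnitude is therefore strictly below the true magnitude, consistent with the bias observed in the simulations.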