In the machine-learning (ML) era, we are transitioning from maximum-likelihood parameter estimation to learning the mapping from measurements to model parameters. While such maps look smooth, there is a danger of them becoming too smooth: at low SNR, ML estimates approach the mean of the training set. Here we derive the fit quality (MSE) as a function of SNR, and show that the MSE for various ML methods (regression, neural networks, random forests) approaches a universal curve interpolating between the Cramér–Rao bound at high SNR and the variance of the prior at low SNR. The theory is validated numerically and on the white matter Standard Model in vivo.
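The interpolation claimed above can be illustrated with a minimal numerical sketch (not part of the abstract; a toy scalar Gaussian model with hypothetical parameters). A regression is trained to map a noisy measurement y = theta + noise back to theta, where theta is drawn from a Gaussian prior. At high SNR the test MSE approaches the Cramér–Rao bound for a single Gaussian measurement (the noise variance), while at low SNR it saturates at the variance of the prior, since the learned map degenerates toward predicting the training-set mean:

```python
import numpy as np

rng = np.random.default_rng(0)
mu0, sig0 = 1.0, 0.5              # hypothetical prior: theta ~ N(mu0, sig0^2)
n_train, n_test = 100_000, 100_000

def mse_of_regression(noise_std):
    """Train a linear map y -> theta_hat and return its test MSE."""
    # training set: parameters drawn from the prior, measurements y = theta + noise
    theta = rng.normal(mu0, sig0, n_train)
    y = theta + rng.normal(0.0, noise_std, n_train)
    a, b = np.polyfit(y, theta, 1)            # least-squares linear regression
    # independent test set at the same SNR
    theta_t = rng.normal(mu0, sig0, n_test)
    y_t = theta_t + rng.normal(0.0, noise_std, n_test)
    return np.mean((a * y_t + b - theta_t) ** 2)

for s in (0.01, 0.5, 5.0):
    crb = s**2                                # CRB for one Gaussian measurement
    print(f"noise={s}: MSE={mse_of_regression(s):.4f}, "
          f"CRB={crb:.4f}, prior variance={sig0**2:.4f}")
```

For this toy model the test MSE follows sig0^2 * s^2 / (sig0^2 + s^2), which reduces to the CRB (s^2) when s is small and to the prior variance (sig0^2) when s is large, mirroring the universal curve described in the abstract.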