Maximum likelihood estimation is challenging in multicompartmental models due to the degeneracy of the optimization landscape. As a result, machine learning (ML) methods are often applied for parameter estimation, interpolating the mapping from measurements to model parameters. Such a mapping can depend strongly on the training set (prior), decreasing the sensitivity to the measurements and yielding artificially "clean" maps. Here we quantify the effect of the training set on the Standard Model of diffusion in white matter as a function of signal-to-noise ratio, in simulations and in vivo.
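The prior-dependence effect described above can be illustrated with a minimal sketch (not the authors' method): a regressor is trained to map noisy synthetic two-compartment signals to a model parameter, and at low SNR its estimates shrink toward the training distribution's mean, reducing apparent variability. The b-values, parameter ranges, and signal model here are hypothetical choices for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
b = np.array([0.0, 1.0, 2.0, 3.0])  # hypothetical b-values (ms/um^2)

def signal(f, Da, De):
    # Toy two-compartment signal: weighted sum of two mono-exponential decays
    # (a stand-in for intra- and extra-axonal compartments).
    return f[:, None] * np.exp(-b * Da[:, None]) + (1 - f[:, None]) * np.exp(-b * De[:, None])

# Training set ("prior"): parameters drawn from assumed uniform ranges.
n = 20000
f = rng.uniform(0.3, 0.7, n)    # compartment fraction (the target parameter)
Da = rng.uniform(1.5, 2.5, n)   # diffusivity, compartment 1
De = rng.uniform(0.5, 1.5, n)   # diffusivity, compartment 2
S = signal(f, Da, De)

def fit_and_predict(snr):
    # Add Gaussian noise at the given SNR, then fit a linear regression
    # from noisy signals to f and return its predictions.
    noisy = S + rng.normal(0.0, 1.0 / snr, S.shape)
    X = np.c_[noisy, np.ones(n)]  # design matrix with intercept
    w, *_ = np.linalg.lstsq(X, f, rcond=None)
    return X @ w

for snr in (100, 5):
    est = fit_and_predict(snr)
    print(f"SNR={snr}: std of estimates = {est.std():.3f}, prior std = {f.std():.3f}")
```

The spread of the estimates is narrower at low SNR than at high SNR: as the measurements become less informative, the regressor leans on the training prior, which is the mechanism behind the artificially "clean" maps mentioned in the abstract.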