In this work we introduce RadixNet, a fast, scalable transform network architecture based on the Cooley-Tukey FFT, and use it within a fully learnt iterative reconstruction scheme with residual dense U-Net image regularization. Results show that fast transform networks can be trained at 256×256 resolution and outperform the FFT.