Dropout layers are commonly used in U-Net deep-learning segmentation models, and it is often desirable to measure the uncertainty of a deployed model during inference in clinical settings. We present a method that converts a pre-trained model into a Bayesian model, which estimates uncertainty from the posterior distribution of its trained weights. Our method uses both the regular dropout layers and converted Monte Carlo dropout layers, estimating uncertainty via the cosine similarity between fixed and stochastic predictions. It can identify cases that differ from the training set by assigning them high uncertainty, and can therefore be used to request human intervention on difficult cases.
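The abstract does not give implementation details, so the sketch below is only an illustration of the general Monte Carlo dropout idea it describes: compare a fixed (deterministic) prediction against several stochastic dropout passes and turn their average cosine similarity into an uncertainty score. For simplicity, the toy applies a dropout mask directly to the prediction vector rather than to the hidden activations of a real U-Net; the function names, the dropout rate `p`, and the `1 - mean similarity` score are all assumptions, not the authors' actual formulation.

```python
import math
import random


def cosine_similarity(a, b):
    # Standard cosine similarity; returns 0.0 if either vector is all zeros.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    if na == 0.0 or nb == 0.0:
        return 0.0
    return dot / (na * nb)


def mc_dropout_pass(pred, p, rng):
    # Stand-in for a stochastic forward pass: randomly zero elements
    # and rescale the survivors by 1/(1-p), as inverted dropout does.
    return [0.0 if rng.random() < p else x / (1.0 - p) for x in pred]


def uncertainty(fixed_pred, n_samples=20, p=0.5, seed=0):
    # Average cosine similarity between the fixed prediction and
    # n_samples stochastic predictions; low similarity -> high uncertainty.
    rng = random.Random(seed)
    sims = [
        cosine_similarity(fixed_pred, mc_dropout_pass(fixed_pred, p, rng))
        for _ in range(n_samples)
    ]
    return 1.0 - sum(sims) / len(sims)


fixed = [0.9, 0.1, 0.8, 0.2, 0.7]  # hypothetical per-pixel scores
print(f"uncertainty: {uncertainty(fixed):.3f}")
```

In a real deployment the stochastic passes would come from re-running the segmentation network with its dropout layers kept active at inference time, and a case whose score exceeds some calibrated threshold would be routed to a human reader.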