Pre-trained deep learning model weights for large-scale medical image datasets are scarce, largely because privacy concerns prevent pooling data for centralized training. Federated learning enables training deep networks across institutions while preserving data privacy. This work explores co-training multi-task models on multiple heterogeneous datasets and validates that weights trained with federated learning can serve as pre-trained weights for downstream tasks.
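The abstract does not specify the aggregation scheme, so the sketch below only illustrates the general federated idea it alludes to, assuming standard federated averaging (FedAvg). The `local_update` step, the three-client setup, and all parameter names are hypothetical placeholders, not the authors' implementation.

```python
import numpy as np

def local_update(global_weights, client_data, lr=0.01, epochs=1):
    """Toy stand-in for local training: nudge the shared weights
    toward this client's data mean instead of running real SGD."""
    w = global_weights.copy()
    for _ in range(epochs):
        grad = w - client_data.mean(axis=0)  # toy gradient
        w -= lr * grad
    return w

def fed_avg(client_weights, client_sizes):
    """FedAvg aggregation: average client parameters weighted by dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Hypothetical setup: three institutions with heterogeneous data distributions.
rng = np.random.default_rng(0)
clients = [rng.normal(loc=mu, size=(n, 8)) for mu, n in [(0.0, 200), (1.0, 120), (2.0, 80)]]
global_w = np.zeros(8)

for _ in range(5):  # communication rounds; raw data never leaves a client
    local_ws = [local_update(global_w, data) for data in clients]
    global_w = fed_avg(local_ws, [len(d) for d in clients])

print(global_w)  # aggregated weights, usable as an initialization for downstream tasks
```

In the multi-task, heterogeneous-dataset setting described above, only the shared parameters would typically be aggregated this way, while task-specific heads stay local; that split is an assumption here, not a detail given in the abstract.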