We propose a stacked U-Net architecture for automatic segmentation of the tongue, velum, and airway in speech MRI based on hybrid learning. Three separate U-Nets are trained, each learning the mapping from the input image to its specific articulator. The two U-Nets that segment the velum and tongue are trained via transfer learning, leveraging open-source brain MRI segmentation models; the third U-Net, for airway segmentation, is trained with classical methods. We demonstrate the utility of our approach by comparing against manual segmentations. A minimal sketch of this per-articulator setup follows.
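The sketch below illustrates the training setup described above under stated assumptions: one binary-segmentation U-Net per articulator, with the tongue and velum networks optionally initialized from pretrained weights. The `TinyUNet` class is a deliberately shallow stand-in (the actual networks are deeper), and the checkpoint path `brain_unet_encoder.pt` is hypothetical; neither is taken from the abstract.

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """Minimal one-level U-Net stand-in; the proposed networks are deeper."""
    def __init__(self, in_ch=1, out_ch=1):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 16, 3, padding=1), nn.ReLU())
        self.down = nn.MaxPool2d(2)
        self.mid = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec = nn.Sequential(nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, out_ch, 1))

    def forward(self, x):
        e = self.enc(x)                       # encoder features (skip connection)
        u = self.up(self.mid(self.down(e)))   # bottleneck + upsample
        return self.dec(torch.cat([u, e], dim=1))

# One binary-segmentation U-Net per articulator.
nets = {name: TinyUNet() for name in ("tongue", "velum", "airway")}

# Hypothetical transfer-learning step: initialize the tongue and velum
# encoders from a brain-MRI segmentation checkpoint (path is illustrative).
# pretrained = torch.load("brain_unet_encoder.pt")
# for name in ("tongue", "velum"):
#     nets[name].enc.load_state_dict(pretrained)

loss_fn = nn.BCEWithLogitsLoss()
for name, net in nets.items():
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    # Dummy batch standing in for midsagittal speech-MRI frames and masks.
    img = torch.randn(4, 1, 64, 64)
    mask = (torch.rand(4, 1, 64, 64) > 0.5).float()
    opt.zero_grad()
    loss = loss_fn(net(img), mask)
    loss.backward()
    opt.step()
```

Training each articulator's network separately, as here, lets the tongue and velum models reuse pretrained features while the airway model is trained conventionally.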