Lights, camera, and domain shift: using superpixels for domain generalization in image segmentation for multimodal endoscopies
Abstract
Deep learning models have made great advances in image processing. Their ability to identify key parts of images and to provide fast, accurate segmentation has been demonstrated in many fields, such as city navigation and object recognition. However, one field both needs the extra information that computers can provide and has proven elusive for the goals of robustness and accuracy: medicine. In the medical field, limited data and the variation introduced by factors such as differences in instrumentation pose a serious threat to model accuracy known as domain shift. Domain shift occurs when the training data has characteristics that are not wholly representative of the full range of data a task encompasses. When it is present, models with no tools to deal with it can suffer such a degradation in accuracy that they go from usable to useless. To frame this problem, we discuss two families of techniques: domain adaptation, which improves a model's predictions for a specific target domain within a task, and domain generalization, which improves a model's predictions for any domain within a task. We also review several image segmentation models that have shown good results on medical tasks: U-Net, Attention U-Net, DeepLab, Efficient U-Net, and EndoUDA. Following this review, we propose a solution based on a domain generalization technique: patch-based consistency. We use a superpixel generator, SLIC (Simple Linear Iterative Clustering), to provide low-level, domain-agnostic information to different models, encouraging our networks to learn more global features.
This framework, which we refer to as SUPRA (SUPeRpixel Augmented), is used in tandem with U-Net, Attention U-Net, and Efficient U-Net to improve results in endoscopies where light modalities are switched, as is common in lesion detection tasks (particularly Barrett's esophagus and polyp detection). We find that the best of these models, SUPRA-UNet, has qualities that make it a better choice than unaugmented networks for lesion detection: it produces smoother, less noisy predictions, and it outperforms the best baseline (U-Net) by over 20% IoU on a target domain whose lighting differs significantly from the training set.
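To make the superpixel step concrete, the following is a minimal, pure-NumPy sketch of the SLIC idea: pixels are clustered in a joint (intensity, x, y) feature space from grid-seeded centers, so the resulting regions adhere to image boundaries while staying roughly compact. This is an illustrative grayscale simplification, not the SUPRA implementation; in practice one would use a full CIELAB, windowed-search SLIC such as `skimage.segmentation.slic`, and the `compactness` trade-off shown here mirrors that library's parameter.

```python
import numpy as np

def slic_superpixels(image, n_segments=64, compactness=10.0, n_iters=5):
    """Minimal SLIC-style superpixels for a 2-D grayscale image (a sketch).

    Runs k-means in (intensity, y, x) space; `compactness` weights spatial
    distance against intensity distance (larger -> more grid-like regions).
    """
    h, w = image.shape
    step = max(1, int(np.sqrt(h * w / n_segments)))  # grid spacing S
    scale = compactness / step                       # spatial weight m / S

    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    feats = np.stack([image.ravel().astype(float),
                      yy.ravel() * scale,
                      xx.ravel() * scale], axis=1)

    # Seed cluster centers on a regular grid, step pixels apart.
    gy, gx = np.meshgrid(np.arange(step // 2, h, step),
                         np.arange(step // 2, w, step), indexing="ij")
    centers = feats[gy.ravel() * w + gx.ravel()].copy()

    for _ in range(n_iters):
        # Assign every pixel to its nearest center in feature space.
        d = ((feats[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = d.argmin(axis=1)
        # Recompute each center as the mean of its assigned pixels.
        for k in range(len(centers)):
            mask = labels == k
            if mask.any():
                centers[k] = feats[mask].mean(axis=0)
    return labels.reshape(h, w)
```

A network augmented this way can then be asked to produce consistent predictions within each superpixel (the patch-based consistency signal), since the superpixel map depends only on low-level structure and is largely agnostic to the lighting modality.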