Image captioning for automated grading and understanding of pre-cancerous inflammations in ulcerative colitis on endoscopic images
Abstract
This thesis presents the development and results of an automated system for grading and understanding ulcerative colitis (UC) through image captioning. UC is a chronic inflammatory disease of the large intestine, characterized by alternating periods of remission and relapse. The conventional method for assessing UC severity is the Mayo Endoscopic Scoring (MES) system, which depends on visual evaluation of mucosal characteristics; this assessment is subjective and can result in considerable variability between observers. The primary objective of this thesis is to investigate and evaluate contemporary methodologies for developing an image captioning model that can generate MES scores and descriptive captions for mucosal features observed in endoscopic images. This research involved an extensive examination of convolutional neural networks (CNNs) for visual feature extraction and the implementation of several sequence models for natural language processing (NLP), including Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU), and vanilla Recurrent Neural Network (RNN) architectures. Our system was rigorously evaluated on a dataset of 982 images obtained from both public repositories and proprietary collections. The combination of DenseNet121 for CNN-based feature extraction and a two-layer GRU for sequence generation yielded the best performance, achieving a BLEU-4 score of 0.7352. This high similarity between reference and predicted captions indicates the model's effectiveness in capturing and describing the mucosal features critical for UC grading. While the system performed well on the MES-0 to MES-2 categories, it struggled to predict MES-3 accurately, a discrepancy likely due to the underrepresentation of severe cases in the training dataset.
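For intuition about the reported metric, sentence-level BLEU-4 can be computed as the geometric mean of modified 1- to 4-gram precisions scaled by a brevity penalty. The function below is a simplified, smoothed sketch of that formula, not the thesis's actual evaluation code, which presumably relies on a standard toolkit implementation:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Multiset of n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu4(reference, candidate):
    """Simplified sentence-level BLEU-4 (illustrative sketch only):
    geometric mean of modified 1-4-gram precisions times a brevity penalty."""
    ref, cand = reference.split(), candidate.split()
    precisions = []
    for n in range(1, 5):
        cand_counts = ngrams(cand, n)
        overlap = sum((cand_counts & ngrams(ref, n)).values())  # clipped matches
        total = max(sum(cand_counts.values()), 1)
        # add-one smoothing so a single zero precision does not zero the score
        precisions.append((overlap + 1) / (total + 1))
    # brevity penalty punishes candidates shorter than the reference
    bp = 1.0 if len(cand) >= len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * math.exp(sum(math.log(p) for p in precisions) / 4)
```

A score of 0.7352, as reported above, therefore indicates substantial n-gram overlap between the generated and reference captions; an identical pair scores 1.0.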
Despite this limitation, the system's ability to generate comprehensive descriptions of mucosal features represents a significant advance in the automated evaluation of UC. The contributions of this thesis include the creation of a dataset for the UC captioning task, a detailed analysis of various CNN architectures and sequence models, an extensive evaluation of their performance, and the development of a robust framework for automated UC grading and description generation. Our findings suggest that combining advanced visual feature extraction with sophisticated NLP models can significantly improve the accuracy and reliability of automated medical diagnosis systems. By reducing inter-observer variability and providing a valuable tool for training new clinicians, this automated grading and captioning system has the potential to enhance diagnostic accuracy and clinical decision-making in UC management. This work represents a substantial step forward in endoscopic imaging, underscoring the importance of integrating machine learning techniques into clinical practice. Additionally, by generating detailed descriptions, the approach helps mitigate the “black box” nature of deep learning, offering greater transparency and interpretability in automated medical diagnoses.
Description
https://orcid.org/0000-0002-9896-8727