Exact Sciences and Health Sciences
Permanent URI for this collection: https://hdl.handle.net/11285/551039
This collection contains theses and graduate works from the Master's programs of the Schools of Engineering and Sciences and of Medicine and Health Sciences.
- Image captioning for automated grading and understanding of pre-cancerous inflammations in ulcerative colitis on endoscopic images (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2024) Valencia Velarde, Flor Helena; Ochoa Ruiz, Gilberto; Hinojosa Cervantes, Salvador Miguel; Gonzalez Mendoza, Miguel; Ali, Sharib; School of Engineering and Sciences; Campus Monterrey.
This thesis presents the development and results of an automated system for grading and understanding ulcerative colitis (UC) through image captioning. UC is a chronic inflammatory disease of the large intestine, characterized by alternating periods of remission and relapse. The conventional method for assessing UC severity is the Mayo Endoscopic Scoring (MES) system, which depends on visual evaluation of mucosal characteristics; this method is subjective and can result in considerable variability between observers. The primary objective of this thesis is to investigate and evaluate contemporary methodologies for developing an image captioning model that can generate MES scores and descriptive captions for mucosal features observed in endoscopic images. This research involved an extensive examination of various convolutional neural networks (CNNs) for visual feature extraction and the implementation of several sequence models for natural language processing (NLP), including Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU), and vanilla Recurrent Neural Network (RNN) decoders. Our system was rigorously evaluated on a dataset of 982 images obtained from both public repositories and proprietary collections. The combination of DenseNet121 for CNN-based feature extraction and a two-layer GRU for sequence generation yielded the best performance, achieving a BLEU-4 score of 0.7352.
This high level of similarity between the reference and predicted captions indicates the model’s effectiveness in accurately capturing and describing the critical mucosal features necessary for UC grading. While our system performed well in predicting the MES-0 to MES-2 categories, it encountered challenges in accurately predicting MES-3 classifications, a discrepancy likely due to the underrepresentation of severe cases in the training dataset. Despite this limitation, the system’s ability to generate comprehensive descriptions of mucosal features represents a significant advancement in the automated evaluation of UC. The contributions of this thesis include the creation of a dataset for the UC captioning task, a detailed analysis of various CNN architectures and sequence models, an extensive evaluation of their performance, and the development of a robust framework for automated UC grading and description generation. Our findings suggest that combining advanced visual feature extraction techniques with sophisticated NLP models can significantly improve the accuracy and reliability of automated medical diagnosis systems. By reducing inter-observer variability and providing a valuable tool for training new clinicians, this automated grading and captioning system has the potential to enhance diagnostic accuracy and clinical decision-making in UC management. This work represents a substantial step forward in the field of endoscopic imaging, underscoring the importance of integrating machine learning techniques into clinical practice. Additionally, by generating detailed descriptions, this approach helps mitigate the “black box” nature of deep learning, offering more transparency and interpretability in automated medical diagnoses.
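The thesis does not publish its code here; as a rough illustrative sketch of the encoder–decoder design it describes (a CNN feature vector driving a two-layer GRU that emits caption tokens), the decoding loop can be outlined as follows. All dimensions, parameter initializations, and the choice to feed the image feature at every step are assumptions for illustration, not details from the thesis; the weights are untrained, so the emitted token ids are meaningless.

```python
import numpy as np

rng = np.random.default_rng(0)

def gru_cell(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU step: gates computed from input x and previous hidden state h."""
    z = 1 / (1 + np.exp(-(Wz @ x + Uz @ h)))   # update gate
    r = 1 / (1 + np.exp(-(Wr @ x + Ur @ h)))   # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))   # candidate hidden state
    return (1 - z) * h + z * h_tilde

FEAT, HID, VOCAB = 1024, 256, 50  # DenseNet121 features are 1024-d; HID and VOCAB are made up

def make_params(in_dim, hid):
    # Returns Wz, Uz, Wr, Ur, Wh, Uh with matching shapes for gru_cell.
    return [rng.normal(0, 0.1, (hid, d)) for d in (in_dim, hid) * 3]

layer1 = make_params(FEAT, HID)   # layer 1 consumes the image feature
layer2 = make_params(HID, HID)    # layer 2 consumes layer 1's hidden state
W_out = rng.normal(0, 0.1, (VOCAB, HID))  # projects hidden state to vocabulary logits

# Greedy decoding: re-feed the CNN feature at each step (one common captioning variant).
feat = rng.normal(size=FEAT)      # stand-in for the DenseNet121 output of one image
h1, h2 = np.zeros(HID), np.zeros(HID)
tokens = []
for _ in range(5):
    h1 = gru_cell(feat, h1, *layer1)
    h2 = gru_cell(h1, h2, *layer2)
    tokens.append(int(np.argmax(W_out @ h2)))  # pick the most likely token id
print(tokens)
```

In a trained system, the argmax over `W_out @ h2` would index into the caption vocabulary, and decoding would stop at an end-of-sequence token rather than after a fixed number of steps.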
- ANOSCAR: An image captioning model and dataset designed from OSCAR and the video dataset of ActivityNet (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2021-07-01) Byrd Suárez, Emmanuel; González Mendoza, Miguel; Ochoa Ruiz, Gilberto; Marín Hernández, Antonio; Chang Fernández, Leonardo; School of Engineering and Sciences; Campus Estado de México.
Activity recognition and classification in video sequences is an area of research that has received attention recently. However, video processing is computationally expensive, and its progress has not been as remarkable as that of image captioning. This work uses a computationally limited environment and learns an image captioning transformation of the ActivityNet-Captions video dataset that can be used for either video captioning or video storytelling. Different data augmentation techniques for natural language processing are explored and applied to the generated dataset in an effort to increase its validation scores. Our proposal includes an image captioning dataset obtained from ActivityNet, with its features generated by Bottom-Up attention, and a model to predict its captions, generated with OSCAR. Our captioning scores are slightly better than those of S2VT, but with a much simpler pipeline, providing a starting point for future research using our approach. Finally, we propose different lines of research into how this work can be further expanded and improved.
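The abstract does not specify how video segments were converted into still images; as a minimal sketch of the kind of video-to-image transformation it describes, each captioned ActivityNet segment could be mapped to a single image–caption pair by sampling its temporal midpoint. The `Segment` fields, the midpoint heuristic, and the file-naming scheme below are all hypothetical, not taken from the thesis.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    start: float   # segment start time in seconds
    end: float     # segment end time in seconds
    caption: str   # human-written description of the segment

def segments_to_frames(video_id, fps, segments):
    """Map each captioned video segment to one (frame filename, caption) pair
    by sampling the frame at the segment's temporal midpoint."""
    pairs = []
    for seg in segments:
        mid_t = (seg.start + seg.end) / 2
        frame_idx = int(mid_t * fps)                 # frame number to extract
        image_name = f"{video_id}_{frame_idx}.jpg"   # hypothetical naming scheme
        pairs.append((image_name, seg.caption))
    return pairs

# Toy annotations in the ActivityNet-Captions style: timed segments with captions.
anns = [Segment(0.0, 4.0, "A man is playing a guitar."),
        Segment(4.0, 10.0, "The crowd applauds.")]
pairs = segments_to_frames("v_abc123", 30.0, anns)
print(pairs)  # [('v_abc123_60.jpg', 'A man is playing a guitar.'), ('v_abc123_210.jpg', 'The crowd applauds.')]
```

The resulting image–caption pairs could then be fed to any image captioning pipeline (here, Bottom-Up feature extraction followed by OSCAR), which is what makes the approach cheaper than processing full video.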

