Ciencias Exactas y Ciencias de la Salud
Permanent URI for this collection: https://hdl.handle.net/11285/551039
This collection contains Master's theses and degree projects from the Schools of Engineering and Sciences and of Medicine and Health Sciences.
Search Results
- Neuroimaging-based pain detector using artificial intelligence approaches (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2025-06-17) Macías Padilla, Brayhan Alan; Hernández Rojas, Luis Guillermo; emipsanchez; Ochoa Ruiz, Gilberto; Mendoza Montoya, Omar; Chailloux Peguero, Juan David; School of Engineering and Sciences; Campus Monterrey; Antelis Ortiz, Javier Mauricio
Chronic pain is a complex, multifactorial experience that varies significantly across time, sex, and individual physiology. This thesis presents the development of a deep learning-based system for classifying pain-related brain activity using functional magnetic resonance imaging (fMRI) from a rodent model of a comorbid pain condition (masseter muscle inflammation followed by stress) that induces chronic visceral pain hypersensitivity (CPH). The proposed system evaluates the potential of convolutional neural networks (CNNs) to detect pain-associated neural patterns under different experimental conditions. Three variations of the VGG16 architecture were implemented and tested: a modified 2D VGG16 adapted to 3D volumes, a multiview 2D ensemble (M2D) fed with axial, sagittal, and coronal slices, and a fully 3D VGG16 model. After an initial benchmarking phase using data from rest sessions, the 3D VGG16 model was selected for subsequent experiments due to its consistent performance and its ability to learn from full volumetric input. Classification tasks involved multiple comparison scenarios, including sex differences, longitudinal progression of pain (from baseline to weeks 1 and 7 after the CPH procedure), and the impact of data selection strategies (full rest sessions vs. distension-specific volume extraction).
Grad-CAM was used to provide anatomical interpretation of model attention, revealing consistent activation of pain-related brain regions such as the insular cortex, somatosensory cortex, thalamic nuclei, and prelimbic area, with marked differences observed between male and female subjects. The results demonstrate the feasibility of using deep neural networks, combined with explainable AI techniques, to decode and interpret pain-related patterns in fMRI data. Furthermore, the performance trends observed in classification tasks align with behavioral findings reported in the literature, supporting the potential of AI-driven neuroimaging analysis to uncover meaningful biological signatures of chronic pain. This study builds directly upon the work conducted by Da Silva et al. [1], who previously processed the same dataset to generate VMR representations and statistical t-maps from fMRI data. Their analysis focused on identifying regions with significant activation differences between conditions using traditional statistical parametric mapping. Expanding on this foundation, the present research integrates deep learning methods, specifically 3D convolutional neural networks (CNNs), to classify experimental conditions directly from the fMRI volumes. Moreover, it incorporates explainable AI techniques (Grad-CAM) to reveal the spatial patterns most influential to classification. This approach offers a shift from region-centric hypothesis testing toward a data-driven, whole-brain interpretability framework, enabling the detection of distributed neural patterns that might not reach statistical significance individually but are collectively informative.
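The Grad-CAM computation this abstract relies on can be sketched numerically. The following is a minimal NumPy illustration of the core steps for volumetric feature maps (channel weights from gradient averages, a weighted sum of activations, then ReLU); the activations and gradients here are synthetic stand-ins, not outputs of the thesis's fMRI models:

```python
import numpy as np

def grad_cam_3d(activations, gradients):
    """Compute a Grad-CAM relevance volume for a 3D conv layer.

    activations: (K, D, H, W) feature maps of the chosen layer.
    gradients:   (K, D, H, W) gradients of the target class score
                 with respect to those feature maps.
    Returns a (D, H, W) non-negative volume normalized to [0, 1].
    """
    # Channel weights alpha_k: global average of the gradients.
    weights = gradients.mean(axis=(1, 2, 3))            # (K,)
    # Weighted sum of feature maps over channels, then ReLU.
    cam = np.einsum("k,kdhw->dhw", weights, activations)
    cam = np.maximum(cam, 0.0)
    # Normalize for visualization (guard against an all-zero map).
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam

# Tiny synthetic example: 4 channels over an 8x8x8 volume.
rng = np.random.default_rng(0)
acts = rng.random((4, 8, 8, 8))
grads = rng.standard_normal((4, 8, 8, 8))
heatmap = grad_cam_3d(acts, grads)
print(heatmap.shape)  # (8, 8, 8)
```

In practice the heatmap would be upsampled to the input resolution and overlaid on the brain volume to read off regions such as the insular cortex.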
- Multimodal data fusion algorithm for image classification (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2024-11) Beder Sabag, Taleb; Vargas Rosales, César; emipsanchez; Pérez García, Benjamín de Jesús; School of Engineering and Sciences; Campus Monterrey
Image classification algorithms are a tool that can be applied across a variety of research sectors, and some of this research requires an extensive amount of data for a model to obtain appropriate results. A workaround for this problem is to implement a multimodal data fusion algorithm: a model that uses data from different acquisition frameworks to compensate for the missing data. In this thesis, we discuss the generation of a CNN model for image classification using transfer learning from three types of architectures in order to compare their results and use the best model; we also implement a Spatial Pyramid Pooling layer to be able to use images with varying dimensions. The model is then tested on three unimodal datasets to analyze its performance and tune its hyperparameters according to the results. We then use the optimized architecture and hyperparameters to train a model on a multimodal dataset. The aim of this thesis is to produce a multimodal image classification model that researchers and other users can apply to analyze images for their own purposes, avoiding the need to implement a model for each specific study.
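The Spatial Pyramid Pooling layer mentioned in this abstract is what allows inputs of varying dimensions: it pools a feature map over a fixed set of grids, so any spatial size yields a vector of the same length. A minimal NumPy sketch of the idea, with illustrative pyramid levels (the thesis's actual levels are not specified here):

```python
import numpy as np

def spatial_pyramid_pool(feature_map, levels=(1, 2, 4)):
    """Max-pool a (C, H, W) feature map over pyramid grids so that
    any H x W input yields a vector of length C * sum(l*l)."""
    c, h, w = feature_map.shape
    pooled = []
    for l in levels:
        # Split each spatial axis into l (nearly) equal bins.
        h_edges = np.linspace(0, h, l + 1).astype(int)
        w_edges = np.linspace(0, w, l + 1).astype(int)
        for i in range(l):
            for j in range(l):
                bin_ = feature_map[:, h_edges[i]:h_edges[i + 1],
                                      w_edges[j]:w_edges[j + 1]]
                pooled.append(bin_.max(axis=(1, 2)))  # (C,) per bin
    return np.concatenate(pooled)

# Two inputs with different spatial sizes give equal-length vectors:
# 8 channels * (1 + 4 + 16) bins = 168 features each.
a = spatial_pyramid_pool(np.random.rand(8, 31, 45))
b = spatial_pyramid_pool(np.random.rand(8, 60, 20))
print(a.shape, b.shape)  # (168,) (168,)
```

This fixed-length output is what lets a dense classifier head sit on top of convolutional features regardless of the original image size.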
- Segmentación semántica de dibujos de retrato de animé utilizando una CNN con la arquitectura U-Net y Mobile-Net como codificador (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2024-05-25) Juárez López, Gerardo César; Falcón, Luis; emimmayorquin; Sanchez, Gildardo; Hinojosa, Salvador; Escuela de Ingeniería y Ciencias; Campus Guadalajara
Image segmentation is a computer vision process used across many fields of knowledge, from medical image analysis to applications in fashion and commerce. Moreover, with new artificial intelligence (AI) technologies, especially generative networks, diverse databases of segmented images have enabled previously unimaginable applications aimed at facilitating the creation of visual and artistic content. The latter depends heavily on large databases to train generative networks well, which limits the opportunity for new AI developments. For this reason, the present work aims to create a tool that can generate databases of segmented images from anime portrait images, with the purpose of opening new opportunities for creating content of the same kind through generative networks.
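The U-Net architecture this work builds on pairs an encoder (here, a MobileNet backbone) with a decoder joined by skip connections. A dependency-free shape-flow sketch of the skip-connection mechanism follows; average pooling and nearest-neighbour upsampling stand in for the learned convolutions, and the channel slice mimics the channel-reducing convolution a real decoder would apply:

```python
import numpy as np

def down(x):
    """2x2 average-pool downsampling of a (C, H, W) map."""
    c, h, w = x.shape
    return x.reshape(c, h // 2, 2, w // 2, 2).mean(axis=(2, 4))

def up(x):
    """2x nearest-neighbour upsampling of a (C, H, W) map."""
    return x.repeat(2, axis=1).repeat(2, axis=2)

def unet_skip_demo(image):
    """Trace U-Net's shape flow: encode twice, then decode while
    concatenating each encoder map onto the upsampled decoder map
    (the skip connection)."""
    e1 = image             # encoder level 1: (C, H, W)
    e2 = down(e1)          # encoder level 2: (C, H/2, W/2)
    bottleneck = down(e2)  # (C, H/4, W/4)
    # Skip link: channels double where encoder and decoder maps meet.
    d2 = np.concatenate([up(bottleneck), e2], axis=0)
    # A learned conv would reduce channels here; we slice to stay
    # dependency-free in this sketch.
    d1 = np.concatenate([up(d2[:e2.shape[0]]), e1], axis=0)
    return d1.shape

print(unet_skip_demo(np.zeros((3, 64, 64))))  # (6, 64, 64)
```

The skip connections are what let the decoder recover the fine spatial detail (hair strands, eye outlines) that pooling discards, which matters for portrait segmentation.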
- Deep Learning Approach for Alzheimer's Disease Classification: Integrating Multimodal MRI and FDG-PET Imaging Through Dual Feature Extractors and Shared Neural Network Processing (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2024) Vega Guzmán, Sergio Eduardo; Alfaro Ponce, Mariel; emimmayorquin; Ochoa Ruíz, Gilberto; Chairez Oria, Jorge Isaac; Hernandez Sanchez, Alejandra; School of Engineering and Sciences; Campus Monterrey; Ramírez Nava, Gerardo Julián
Alzheimer's disease (AD) is a progressive neurodegenerative disorder whose incidence is expected to grow in the coming years. Traditional diagnostic methods, such as MRI and FDG-PET, each provide valuable but limited insights into the disease's pathology. This thesis investigates the potential of a multimodal deep learning classifier to improve the diagnostic accuracy of AD by integrating MRI and FDG-PET imaging data, in comparison to single-modality implementations. The study proposes a lightweight neural architecture that leverages the strengths of both imaging modalities, aiming to reduce computational costs while maintaining state-of-the-art diagnostic performance. The proposed model utilizes two pre-trained feature extractors, one for each imaging modality, fine-tuned to capture the relevant features from the dataset. The outputs of these extractors are fused into a single vector to form an enriched feature map that better describes the brain. Experimental results demonstrate that the multimodal classifier outperforms single-modality classifiers, achieving an overall accuracy of 90% on the test dataset. The VGG19 model was the best feature extractor for both MRI and PET data, showing superior performance compared to the other experimental models, with an accuracy of 71.9% for MRI and 80.3% for PET images. The multimodal implementation also exhibited higher precision, recall, and F1 scores than the single-modality implementations.
For instance, it achieved a precision of 0.90, recall of 0.94, and F1-score of 0.92 for the AD class, and a precision of 0.89, recall of 0.82, and F1-score of 0.86 for the CN class. Furthermore, explainable AI techniques provided insights into the model's decision-making process, revealing that it effectively utilizes both structural and metabolic information to distinguish between AD and cognitively normal (CN) subjects. This research provides supporting evidence for the potential of multimodal imaging and machine learning to enhance early detection and diagnosis of Alzheimer's disease, offering a cost-effective solution suitable for widespread clinical applications.
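The fusion step this abstract describes, concatenating per-modality feature vectors into one enriched vector fed to a shared classifier head, can be sketched as follows. The feature dimensions and classifier weights below are placeholders, not the thesis's trained values:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1D score vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

def fused_forward(mri_feat, pet_feat, w, b):
    """Concatenate per-modality feature vectors (late fusion) and
    run a shared linear classifier head over the joint vector."""
    fused = np.concatenate([mri_feat, pet_feat])  # enriched feature map
    return softmax(w @ fused + b)                 # [P(AD), P(CN)]

rng = np.random.default_rng(1)
mri = rng.random(128)   # stand-in for the MRI extractor's output
pet = rng.random(128)   # stand-in for the FDG-PET extractor's output
w = rng.standard_normal((2, 256)) * 0.01  # placeholder head weights
b = np.zeros(2)
probs = fused_forward(mri, pet, w, b)
print(probs.shape, round(float(probs.sum()), 6))  # (2,) 1.0
```

Because the head sees both modalities jointly, it can weigh structural (MRI) and metabolic (PET) evidence together rather than averaging two separate decisions.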
- VGG-16 para detección de COVID-19 y pulmonía en radiografías de tórax (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2023-11) Pérez Durán, Luis Arturo; Falcón Morales, Luis Eduardo; emimmayorquin; Corona Burgueño, Juan Francisco; Sánchez Ante, Gildardo; School of Engineering and Sciences; Campus Guadalajara
The main problem brought by the pandemic caused by the SARS-CoV-2 virus was the need to determine quickly and effectively whether a patient was infected. The main detection methods are the PCR test and the antigen test: one of these is very reliable but can take up to 3 days to give a result, while the other has results in minutes but can give erroneous results depending on when during the illness the test is taken. This thesis quantitatively explores how effective a VGG-16 convolutional neural network model is for diagnosing lung diseases, namely COVID-19 and pneumonia, as well as identifying normal chest X-rays. For the development and training of the model, a free database containing 15,000 images is used. In addition, previous works that also address the classification of medical images using different convolutional neural network models are reviewed. The thesis explains how an artificial neural network and a convolutional neural network work, and reviews the structure of VGG-16, the model selected for this thesis. Four model variants were created: the first model performs binary classification of whether a patient is healthy or infected with COVID-19, while the second classifies three categories: COVID-19, normal, or pneumonia. The other variation across the models is in the training, modifying some parameters and changing the size of the images used for learning, with one version using 128 x 128 pixel images and another using 224 x 224 pixel images. In conclusion, considering the data obtained from the four variants, the models trained with 128 x 128 pixel images obtained better results than those trained with 224 x 224 pixel images, achieving a higher percentage of correct predictions on the test images.
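One concrete consequence of the two training image sizes compared above is the size of VGG-16's flattened convolutional output, which feeds the first dense layer: VGG-16's five 2x2 max-pool stages halve the side length each time, so smaller inputs yield a much smaller dense layer. A quick arithmetic sketch (standard VGG-16 figures; the thesis's exact head may differ):

```python
def vgg16_flatten_size(input_px, pools=5, channels=512):
    """Length of VGG-16's flattened conv output for a square input:
    five 2x2 max-pools each halve the side, ending with 512 channels."""
    side = input_px // (2 ** pools)
    return side * side * channels

for px in (128, 224):
    print(px, "->", vgg16_flatten_size(px))
# 128 -> 8192
# 224 -> 25088
```

The roughly threefold smaller feature vector at 128 x 128 means fewer dense-layer parameters to fit, which is one plausible factor behind the better results reported for the smaller images.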

