Exact Sciences and Health Sciences

Permanent URI for this collection: https://hdl.handle.net/11285/551039

This collection contains the theses and degree projects of the master's programs of the School of Engineering and Sciences and the School of Medicine and Health Sciences.

  • Master's thesis
    Neuroimaging-based pain detector using artificial intelligence approaches
    (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2025-06-17) Macías Padilla, Brayhan Alan; Hernández Rojas, Luis Guillermo; emipsanchez; Ochoa Ruiz, Gilberto; Mendoza Montoya, Omar; Chailloux Peguero, Juan David; School of Engineering and Sciences; Campus Monterrey; Antelis Ortiz, Javier Mauricio
    Chronic pain is a complex, multifactorial experience that varies significantly across time, sex, and individual physiology. This thesis presents the development of a deep learning-based system for classifying pain-related brain activity using functional magnetic resonance imaging (fMRI) from a rodent model of a comorbid pain condition (masseter muscle inflammation followed by stress) that induces chronic visceral pain hypersensitivity (CPH). The proposed system evaluates the potential of convolutional neural networks (CNNs) to detect pain-associated neural patterns under different experimental conditions.

    Three variations of the VGG16 architecture were implemented and tested: a modified 2D VGG16 adapted to 3D volumes, a multiview 2D ensemble (M2D) fed with axial, sagittal, and coronal slices, and a fully 3D VGG16 model. After an initial benchmarking phase using data from rest sessions, the 3D VGG16 model was selected for subsequent experiments due to its consistent performance and its ability to learn from full volumetric input. Classification tasks involved multiple comparison scenarios, including sex differences, longitudinal progression of pain (from baseline to weeks 1 and 7 after the CPH procedure), and the impact of data selection strategies (full rest sessions vs. distension-specific volume extraction). Grad-CAM was used to provide anatomical interpretation of model attention, revealing consistent activation of pain-related brain regions such as the insular cortex, somatosensory cortex, thalamic nuclei, and prelimbic area, with marked differences observed between male and female subjects.

    The results demonstrate the feasibility of using deep neural networks, combined with explainable AI techniques, to decode and interpret pain-related patterns in fMRI data. Furthermore, the performance trends observed in classification tasks align with behavioral findings reported in the literature, supporting the potential of AI-driven neuroimaging analysis to uncover meaningful biological signatures of chronic pain.

    This study builds directly upon the work conducted by Da Silva et al. [1], who previously processed the same dataset to generate VMR representations and statistical t-maps from fMRI data. That analysis focused on identifying regions with significant activation differences between conditions using traditional statistical parametric mapping. Expanding on this foundation, the present research integrates deep learning methods, specifically 3D convolutional neural networks (CNNs), to classify experimental conditions directly from the fMRI volumes. Moreover, it incorporates explainable AI techniques (Grad-CAM) to reveal the spatial patterns most influential to classification. This approach offers a shift from region-centric hypothesis testing toward a data-driven, whole-brain interpretability framework, enabling the detection of distributed neural patterns that might not reach statistical significance individually but are collectively informative.
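The Grad-CAM interpretation described above reduces to a simple computation once a convolutional layer's activations and the gradients of the class score with respect to them are available. The following is a minimal NumPy sketch of that core step for a 3D model, not the thesis's actual code; the array shapes and names are illustrative.

```python
import numpy as np

def grad_cam_3d(activations, gradients):
    """Grad-CAM heatmap from conv-layer activations (C x D x H x W) and the
    gradients of the target class score w.r.t. those activations.

    Each channel is weighted by its spatially averaged gradient, the weighted
    maps are summed over channels, and a ReLU keeps only regions that increase
    the class score; the result is scaled to [0, 1].
    """
    weights = gradients.mean(axis=(1, 2, 3))           # one weight per channel
    cam = np.tensordot(weights, activations, axes=1)   # weighted sum over C
    cam = np.maximum(cam, 0.0)                         # ReLU
    if cam.max() > 0:
        cam = cam / cam.max()                          # normalize to [0, 1]
    return cam

# Toy activations/gradients standing in for a real forward/backward pass.
acts = np.random.rand(16, 4, 8, 8)
grads = np.random.rand(16, 4, 8, 8)
heatmap = grad_cam_3d(acts, grads)
```

In practice the heatmap would be upsampled to the fMRI volume's resolution and overlaid on an anatomical template to identify regions such as the insular cortex or thalamic nuclei.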
  • Master's thesis
    Multimodal data fusion algorithm for image classification
    (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2024-11) Beder Sabag, Taleb; Vargas Rosales, César; emipsanchez; Pérez García, Benjamín de Jesús; School of Engineering and Sciences; Campus Monterrey
    Image classification algorithms are a tool that can be applied across a variety of research sectors, some of which require extensive amounts of data for a model to obtain appropriate results. One workaround for this problem is a multimodal data fusion algorithm: a model that uses data from different acquisition frameworks to compensate for the missing data. In this thesis, we discuss the construction of a CNN model for image classification using transfer learning from three types of architectures, comparing their results to select the best model; we also implement a Spatial Pyramid Pooling layer so that images with varying dimensions can be used. The model is first tested on three unimodal datasets to analyze its performance and tune its hyperparameters according to the results. We then use the optimized architecture and hyperparameters to train a model on a multimodal dataset. The aim of this thesis is to produce a multimodal image classification model that researchers and practitioners can use to analyze images for their own purposes, avoiding the need to implement a model for each specific study.
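The Spatial Pyramid Pooling layer mentioned above is what lets a CNN accept images of varying dimensions: the final feature map is pooled over a fixed grid of cells at several scales, so the output vector length depends only on the channel count and the pyramid levels, never on the input size. A minimal NumPy sketch of the idea (the level choices and shapes are illustrative, not taken from the thesis):

```python
import numpy as np

def spatial_pyramid_pool(fmap, levels=(1, 2, 4)):
    """Max-pool a C x H x W feature map into a fixed-length vector.

    For each pyramid level n, the map is divided into an n x n grid and
    max-pooled per cell, so the output length is C * sum(n * n) for all
    levels, regardless of the input H and W.
    """
    c, h, w = fmap.shape
    pooled = []
    for n in levels:
        hs = np.linspace(0, h, n + 1, dtype=int)  # cell edges along height
        ws = np.linspace(0, w, n + 1, dtype=int)  # cell edges along width
        for i in range(n):
            for j in range(n):
                cell = fmap[:, hs[i]:hs[i + 1], ws[j]:ws[j + 1]]
                pooled.append(cell.max(axis=(1, 2)))
    return np.concatenate(pooled)

# Feature maps of different spatial sizes yield the same vector length:
# 8 channels * (1 + 4 + 16) cells = 168 values either way.
a = spatial_pyramid_pool(np.random.rand(8, 13, 17))
b = spatial_pyramid_pool(np.random.rand(8, 32, 24))
```

Because the pooled vector has a fixed length, it can feed the fully connected head of a transfer-learned backbone without resizing or cropping the input images.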
  • Master's thesis
    Deep Learning Approach for Alzheimer’s Disease Classification: Integrating Multimodal MRI and FDG- PET Imaging Through Dual Feature Extractors and Shared Neural Network Processing
    (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2024) Vega Guzmán, Sergio Eduardo; Alfaro Ponce, Mariel; emimmayorquin; Ochoa Ruíz, Gilberto; Chairez Oria, Jorge Isaac; Hernandez Sanchez, Alejandra; School of Engineering and Sciences; Campus Monterrey; Ramírez Nava, Gerardo Julián
    Alzheimer’s disease (AD) is a progressive neurodegenerative disorder whose incidence is expected to grow in the coming years. Traditional diagnostic methods, such as MRI and FDG-PET, each provide valuable but limited insights into the disease’s pathology. This thesis investigates the potential of a multimodal deep learning classifier to improve the diagnostic accuracy of AD by integrating MRI and FDG-PET imaging data, in comparison to single-modality implementations. The study proposes a lightweight neural architecture that leverages the strengths of both imaging modalities, aiming to reduce computational costs while maintaining state-of-the-art diagnostic performance. The proposed model uses two pre-trained feature extractors, one per imaging modality, fine-tuned to capture the relevant features from the dataset. The outputs of these extractors are fused into a single vector to form an enriched feature map that better describes the brain. Experimental results demonstrate that the multimodal classifier outperforms single-modality classifiers, achieving an overall accuracy of 90% on the test dataset. The VGG19 model was the best feature extractor for both MRI and PET data, showing superior performance compared to the other experimental models, with an accuracy of 71.9% for MRI and 80.3% for PET images. The multimodal implementation also exhibited higher precision, recall, and F1 scores than the single-modality implementations. For instance, it achieved a precision of 0.90, recall of 0.94, and F1-score of 0.92 for the AD class, and a precision of 0.89, recall of 0.82, and F1-score of 0.86 for the CN class. Furthermore, explainable AI techniques provided insights into the model’s decision-making process, revealing that it effectively uses both structural and metabolic information to distinguish between AD and cognitively normal (CN) subjects.
    This research adds supporting evidence to the potential of multimodal imaging and machine learning to enhance early detection and diagnosis of Alzheimer’s disease, offering a cost-effective solution suitable for widespread clinical applications.
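The fusion step this abstract describes — two per-modality feature extractors whose outputs are concatenated into one enriched vector before a shared classification head — can be sketched in a few lines. The following NumPy fragment is an illustrative late-fusion sketch, not the thesis's implementation; the feature dimensions, weights, and two-class (AD vs. CN) head are assumptions for the example.

```python
import numpy as np

def fuse_and_classify(mri_features, pet_features, w, b):
    """Concatenate the two modality feature vectors and apply a linear
    classification head followed by a numerically stable softmax."""
    fused = np.concatenate([mri_features, pet_features])  # enriched feature vector
    logits = w @ fused + b
    exp = np.exp(logits - logits.max())                   # stable softmax
    return exp / exp.sum()

rng = np.random.default_rng(0)
mri = rng.random(512)   # stand-in for pooled MRI extractor output
pet = rng.random(512)   # stand-in for pooled FDG-PET extractor output
w = rng.standard_normal((2, 1024)) * 0.01  # head weights: 2 classes (AD, CN)
b = np.zeros(2)
probs = fuse_and_classify(mri, pet, w, b)
```

In the trained system, the two extractors would be fine-tuned backbones (VGG19 in this thesis) and the head would be learned jointly, but the fused-vector design is the part that lets structural and metabolic information inform a single decision.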
Unless otherwise specified, these materials are shared under the following terms: Attribution-NonCommercial-NoDerivatives (CC BY-NC-ND) http://www.creativecommons.mx/#licencias

