Exact Sciences and Health Sciences
Permanent URI for this collection: https://hdl.handle.net/11285/551039
This collection contains theses and graduate research projects from the Master's programs of the Escuela de Ingeniería y Ciencias and the Escuela de Medicina y Ciencias de la Salud.
Search Results
- Deep Learning Approach for Alzheimer’s Disease Classification: Integrating Multimodal MRI and FDG-PET Imaging Through Dual Feature Extractors and Shared Neural Network Processing (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2024) Vega Guzmán, Sergio Eduardo; Alfaro Ponce, Mariel; Ochoa Ruíz, Gilberto; Chairez Oria, Jorge Isaac; Hernandez Sanchez, Alejandra; Ramírez Nava, Gerardo Julián; School of Engineering and Sciences; Campus Monterrey

  Alzheimer’s disease (AD) is a progressive neurodegenerative disorder whose incidence is expected to grow in the coming years. Traditional diagnostic methods, such as MRI and FDG-PET, each provide valuable but limited insight into the disease’s pathology. This thesis investigates the potential of a multimodal deep learning classifier to improve the diagnostic accuracy of AD by integrating MRI and FDG-PET imaging data, compared with single-modality implementations. The study proposes a lightweight neural architecture that leverages the strengths of both imaging modalities, aiming to reduce computational cost while maintaining state-of-the-art diagnostic performance. The proposed model uses two pre-trained feature extractors, one per imaging modality, fine-tuned to capture the relevant features of the dataset. The outputs of these extractors are fused into a single vector, forming an enriched feature map that better describes the brain. Experimental results demonstrate that the multimodal classifier outperforms single-modality classifiers, achieving an overall accuracy of 90% on the test dataset. VGG19 was the best feature extractor for both MRI and PET data, outperforming the other candidate models with an accuracy of 71.9% on MRI and 80.3% on PET images. The multimodal implementation also exhibited higher precision, recall, and F1 scores than the single-modality implementations.
  For instance, it achieved a precision of 0.90, recall of 0.94, and F1-score of 0.92 for the AD class, and a precision of 0.89, recall of 0.82, and F1-score of 0.86 for the CN class. Furthermore, explainable AI techniques provided insight into the model’s decision-making process, revealing that it effectively uses both structural and metabolic information to distinguish between AD and cognitively normal (CN) subjects. This research adds supporting evidence for the potential of multimodal imaging and machine learning to enhance early detection and diagnosis of Alzheimer’s disease, offering a cost-effective solution suitable for widespread clinical application.
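  The fusion step described in this abstract (two modality-specific feature extractors whose outputs are concatenated and passed to a shared classifier) can be sketched as follows. This is a minimal illustration, not the thesis implementation: the feature dimensions, random weights, and layer sizes are all hypothetical.

  ```python
  import numpy as np

  rng = np.random.default_rng(0)

  # Hypothetical feature vectors from two fine-tuned extractors
  # (e.g., VGG19 backbones); the 512-dim size is illustrative only.
  mri_features = rng.standard_normal(512)   # structural (MRI) features
  pet_features = rng.standard_normal(512)   # metabolic (FDG-PET) features

  # Fuse both modalities into one enriched feature vector by concatenation.
  fused = np.concatenate([mri_features, pet_features])  # shape (1024,)

  # Shared dense layer mapping the fused vector to two classes (AD vs. CN).
  W = rng.standard_normal((2, fused.size)) * 0.01
  b = np.zeros(2)
  logits = W @ fused + b

  # Softmax turns the logits into class probabilities.
  probs = np.exp(logits - logits.max())
  probs /= probs.sum()
  ```

  In a trained network the weights `W`, `b` would be learned jointly with the fine-tuned extractors; here they are random placeholders that only demonstrate the data flow.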
- Mining contrast patterns from multivariate decision trees (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2018) Cañete Sifuentes, Leonardo Mauricio; Monroy, Raúl; Jair Escalante, Hugo; Conant Pablos, Santiago Enrique; Loyola González, Octavio; Medina Pérez, Miguel Angel; Escuela de Ingeniería y Ciencias; Campus Estado de México

  Currently, there is growing interest in the development of classifiers based on contrast patterns (CPs), partly because they can explain a classification result in a language that is easy for an expert to understand. Thorough experiments show that CP-based classifiers, when using contrast patterns extracted by miners based on decision trees, attain accuracies comparable with state-of-the-art classifiers such as SVM, k-NN, C4.5, Bagging, and Boosting. Existing decision-tree-based miners use Univariate Decision Trees (UDTs) to extract CPs. For tree-based classification, classifiers based on Multivariate Decision Trees (MDTs) achieve better accuracy than those based on UDTs. This result might be attributable to MDTs using multivariate relations (e.g., 2·height + 3·weight > 40) which, in some cases, separate the classes better than the univariate relations (e.g., age > 40) that UDTs use. Our hypothesis runs parallel, but for CP-based classification: using CPs extracted by MDT-based miners, which we call multivariate contrast patterns, a CP-based classifier should significantly improve on the performance of classifiers based on UDTs. We propose an algorithm to extract, simplify, and filter multivariate CPs, and we present an empirical study of it. We use 112 datasets, taking half of them for tuning the parameters of our algorithm.
  To validate our hypothesis, we use the other half of the datasets as a testing set to compare our algorithm against other state-of-the-art CP miners in terms of pattern quality, and against other state-of-the-art classifiers in terms of classification performance. The results obtained on the testing set show that the quality of multivariate CPs, measured by the Jaccard index, is significantly higher than that of CPs extracted through UDTs (univariate CPs). We also show that classification results for CP-based classifiers are significantly better when using multivariate CPs than when using univariate CPs, which could be explained by the higher quality of multivariate CPs. The classification results for multivariate CP-based classifiers are also competitive with non-pattern-based state-of-the-art classifiers. Moreover, multivariate CP-based classifiers provide contrast patterns: abstract-level explanations that can help an expert gain insight into the problem under investigation.
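  The difference between univariate and multivariate conditions described in this abstract can be illustrated with a small sketch. The toy data, coefficients, and thresholds below are hypothetical (the multivariate condition merely echoes the abstract’s 2·height + 3·weight > 40 example); a pattern contrasts the classes when its support is high in one class and low in the other.

  ```python
  # Hypothetical toy rows: (height, weight, class label); values are illustrative.
  data = [
      (1.5, 13.0, "pos"), (2.0, 14.0, "pos"), (1.8, 13.0, "pos"),
      (1.6,  5.0, "neg"), (1.2,  6.0, "neg"), (0.9,  4.0, "neg"),
  ]

  def univariate(h, w):
      # A univariate condition, as a UDT split would produce.
      return h > 1.4

  def multivariate(h, w):
      # A multivariate linear condition, as an MDT split would produce.
      return 2 * h + 3 * w > 40

  def support(pattern, label):
      # Fraction of rows of a given class that satisfy the pattern.
      rows = [r for r in data if r[2] == label]
      return sum(pattern(h, w) for h, w, _ in rows) / len(rows)
  ```

  On this toy data the multivariate condition separates the two classes perfectly (support 1.0 on "pos", 0.0 on "neg"), while the univariate threshold also covers one "neg" row, mirroring the intuition that linear combinations of attributes can separate classes that no single-attribute split can.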

