Deep Learning Approach for Alzheimer’s Disease Classification: Integrating Multimodal MRI and FDG-PET Imaging Through Dual Feature Extractors and Shared Neural Network Processing

dc.audience.educationlevel: Público en general / General public
dc.audience.educationlevel: Investigadores / Researchers
dc.audience.educationlevel: Estudiantes / Students
dc.audience.educationlevel: Otros / Other
dc.contributor.advisor: Alfaro Ponce, Mariel
dc.contributor.author: Vega Guzmán, Sergio Eduardo
dc.contributor.cataloger: emimmayorquin
dc.contributor.committeemember: Ochoa Ruíz, Gilberto
dc.contributor.committeemember: Chairez Oria, Jorge Isaac
dc.contributor.committeemember: Hernandez Sanchez, Alejandra
dc.contributor.department: School of Engineering and Sciences
dc.contributor.institution: Campus Monterrey
dc.contributor.mentor: Ramírez Nava, Gerardo Julián
dc.date.accepted: 2024-06-10
dc.date.accessioned: 2025-05-08T22:21:30Z
dc.date.issued: 2024
dc.description: https://orcid.org/0000-0002-4270-0350
dc.description.abstract: Alzheimer’s disease (AD) is a progressive neurodegenerative disorder whose incidence is expected to grow in the coming years. Traditional diagnostic methods, such as MRI and FDG-PET, each provide valuable but limited insights into the disease’s pathology. This thesis investigates whether a multimodal deep learning classifier that integrates MRI and FDG-PET imaging data can improve the diagnostic accuracy of AD relative to single-modality implementations. The study proposes a lightweight neural architecture that exploits the strengths of both imaging modalities, aiming to reduce computational cost while maintaining state-of-the-art diagnostic performance. The proposed model uses two pre-trained feature extractors, one per imaging modality, fine-tuned to capture the relevant features of the dataset. The outputs of these extractors are fused into a single vector, forming an enriched feature map that better describes the brain. Experimental results demonstrate that the multimodal classifier outperforms single-modality classifiers, achieving an overall accuracy of 90% on the test dataset. VGG19 was the best feature extractor for both MRI and PET data, outperforming the other experimental models with an accuracy of 71.9% on MRI and 80.3% on PET images. The multimodal implementation also exhibited higher precision, recall, and F1 scores than the single-modality implementations: for instance, it achieved a precision of 0.90, recall of 0.94, and F1-score of 0.92 for the AD class, and a precision of 0.89, recall of 0.82, and F1-score of 0.86 for the CN class. Furthermore, explainable AI techniques provided insights into the model’s decision-making process, revealing that it effectively uses both structural and metabolic information to distinguish between AD and cognitively normal (CN) subjects.
This research adds supporting evidence for the potential of multimodal imaging and machine learning to enhance early detection and diagnosis of Alzheimer’s disease, offering a cost-effective solution suitable for widespread clinical application.
dc.description.degree: Master of Science in Computer Science
dc.format.medium: Texto
dc.identificator: 331499||120304
dc.identifier.citation: Vega Guzman, S. E. (2024). Deep learning approach for Alzheimer’s disease classification: Integrating multimodal MRI and FDG-PET imaging through dual feature extractors and shared neural network processing. [Tesis maestría] Instituto Tecnologico y de Estudios Superiores de Monterrey. Recuperado de: https://hdl.handle.net/11285/703629
dc.identifier.cvu: 1239365
dc.identifier.orcid: https://orcid.org/0009-0002-2857-317X
dc.identifier.uri: https://hdl.handle.net/11285/703629
dc.language.iso: eng
dc.publisher: Instituto Tecnológico y de Estudios Superiores de Monterrey
dc.relation: Instituto Tecnológico y de Estudios Superiores de Monterrey
dc.relation: CONAHCYT
dc.relation.isFormatOf: publishedVersion
dc.rights: openAccess
dc.rights.uri: http://creativecommons.org/licenses/by/4.0
dc.subject.classification: INGENIERÍA Y TECNOLOGÍA::CIENCIAS TECNOLÓGICAS::TECNOLOGÍA MÉDICA::OTRAS
dc.subject.classification: CIENCIAS FÍSICO MATEMÁTICAS Y CIENCIAS DE LA TIERRA::MATEMÁTICAS::CIENCIA DE LOS ORDENADORES::INTELIGENCIA ARTIFICIAL
dc.subject.keyword: Computer Vision
dc.subject.keyword: CNN
dc.subject.keyword: VGG19
dc.subject.keyword: Machine Learning
dc.subject.keyword: Alzheimer's disease
dc.subject.keyword: Classification
dc.subject.lcsh: Technology
dc.subject.lcsh: Medicine
dc.title: Deep Learning Approach for Alzheimer’s Disease Classification: Integrating Multimodal MRI and FDG-PET Imaging Through Dual Feature Extractors and Shared Neural Network Processing
dc.type: Tesis de Maestría / Master's Thesis

Files

Original bundle (3 files)

Name: VegaGuzman_TesisMaestria.pdf
Size: 4.49 MB
Format: Adobe Portable Document Format
Description: Tesis Maestría (master's thesis)

Name: VegaGuzman_CartaAutorizacion.pdf
Size: 236.51 KB
Format: Adobe Portable Document Format
Description: Carta Autorización (authorization letter)

Name: VegaGuzman_FirmasActadeGrado.pdf
Size: 593.48 KB
Format: Adobe Portable Document Format
Description: Firmas Acta de Grado (degree certificate signatures)

License bundle (1 file)

Name: license.txt
Size: 1.3 KB
Format: Item-specific license agreed upon to submission

The user is obliged to use the services and content provided by the University, in particular its printed materials and electronic resources, in accordance with current legislation, the principles of good faith, and generally accepted practice, and without contravening public order, especially in cases where, for the proper performance of their activity, the user needs to reproduce, distribute, communicate and/or make available fragments of printed works, or of works in analog or digital format, whether on paper or electronic media. Ley 23/2006, of 7 July, amending the revised text of the Ley de Propiedad Intelectual, approved

