Ciencias Exactas y Ciencias de la Salud
Permanent URI for this collection: https://hdl.handle.net/11285/551039
This collection holds the theses and graduate research projects of the Master's programs of the School of Engineering and Sciences as well as the School of Medicine and Health Sciences.
Search Results
- Neuroimaging-based pain detector using artificial intelligence approaches (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2025-06-17) Macías Padilla, Brayhan Alan; Hernández Rojas, Luis Guillermo; Ochoa Ruiz, Gilberto; Mendoza Montoya, Omar; Chailloux Peguero, Juan David; School of Engineering and Sciences; Campus Monterrey; Antelis Ortiz, Javier Mauricio. Chronic pain is a complex, multifactorial experience that varies significantly across time, sex, and individual physiology. This thesis presents the development of a deep learning-based system for classifying pain-related brain activity using functional magnetic resonance imaging (fMRI) from a rodent model of a comorbid pain condition (masseter muscle inflammation followed by stress) that induces chronic visceral pain hypersensitivity (CPH). The proposed system evaluates the potential of convolutional neural networks (CNNs) to detect pain-associated neural patterns under different experimental conditions. Three variations of the VGG16 architecture were implemented and tested: a modified 2D VGG16 adapted to 3D volumes, a multiview 2D ensemble (M2D) fed with axial, sagittal, and coronal slices, and a fully 3D VGG16 model. After an initial benchmarking phase using data from rest sessions, the 3D VGG16 model was selected for subsequent experiments due to its consistent performance and its ability to learn from full volumetric input. Classification tasks involved multiple comparison scenarios, including sex differences, longitudinal progression of pain (from baseline to weeks 1 and 7 after the CPH procedure), and the impact of data selection strategies (full rest sessions vs. distension-specific volume extraction). Grad-CAM was used to provide anatomical interpretation of model attention, revealing consistent activation of pain-related brain regions such as the insular cortex, somatosensory cortex, thalamic nuclei, and prelimbic area, with marked differences observed between male and female subjects. The results demonstrate the feasibility of using deep neural networks, combined with explainable AI techniques, to decode and interpret pain-related patterns in fMRI data. Furthermore, the performance trends observed in classification tasks align with behavioral findings reported in the literature, supporting the potential of AI-driven neuroimaging analysis to uncover meaningful biological signatures of chronic pain. This study builds directly upon the work conducted by Da Silva et al. [1], who previously processed the same dataset to generate VMR representations and statistical t-maps from fMRI data. Their analysis focused on identifying regions with significant activation differences between conditions using traditional statistical parametric mapping. Expanding on this foundation, the present research integrates deep learning methods, specifically 3D convolutional neural networks (CNNs), to classify experimental conditions directly from the fMRI volumes. Moreover, it incorporates explainable AI techniques (Grad-CAM) to reveal the spatial patterns most influential to classification. This approach offers a shift from region-centric hypothesis testing toward a data-driven, whole-brain interpretability framework, enabling the detection of distributed neural patterns that might not reach statistical significance individually but are collectively informative.
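As an illustration of the third architecture mentioned in this entry, the sketch below shows a minimal VGG-style 3D CNN that classifies single-channel volumes, written in PyTorch. It is not the thesis model: the layer widths, volume size, and two-class output are assumptions for demonstration only, and the Grad-CAM step is not shown.

```python
# Illustrative sketch only: a small VGG-style 3D CNN for binary classification of
# volumetric input. Channel counts, volume size, and class count are assumptions.
import torch
import torch.nn as nn

class Small3DVGG(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            nn.Linear(32, n_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# A single-channel volume with shape (batch, channel, depth, height, width).
volume = torch.randn(1, 1, 32, 64, 64)
logits = Small3DVGG()(volume)
print(logits.shape)  # torch.Size([1, 2])
```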
- Adaptive learning for providing inclusive contents based on student profile in digital education (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2024-12-03) Alvarado Reyes, Ignacio; Molina Espinosa, José Martín; Icaza Longoria, Inés Alvarez; Suárez Brito, Paloma; School of Engineering and Sciences; Campus Estado de México. The application of artificial intelligence technologies in education has increased in recent years, especially through adaptive learning technologies designed to monitor different characteristics of students and provide them with content and suggestions aimed at improving their performance and avoiding problems they may have on digital platforms. In this study, a reference framework for student classification was explored, with a proposal of the contents and accessibility functions that could be applied based on students' learning characteristics, complemented by an implementation of adaptive learning technologies consisting of a classifier based on the decision tree algorithm that automatically processes student data and classifies students within the classes defined in the framework. The classifier was trained with two data sets: initially with data generated in the laboratory and later with experimental data obtained through a survey aimed at higher education students. Both instances of the trained algorithm demonstrated high accuracy for the classification process (99.98% with synthetic data and 95.94% with experimental data). Subsequently, through the same survey, the suggestions related to the classes assigned to the students were validated, as well as the suggested accessibility features and content. The suggestions show a favorable acceptance range, with rejection percentages between 0% and 6% for the content selections and between 14% and 34% for the accessibility options. With this dynamic implementation of educational content and digital accessibility features, we seek to provide personalized learning for different student profiles while seeking to implement more features that address diversity and inclusion concerns.
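The following sketch illustrates the kind of decision-tree classifier described in this entry, using scikit-learn. It is not the study's implementation; the feature names, profile labels, and data are hypothetical placeholders.

```python
# Illustrative sketch only: training a decision tree to assign students to
# learning-profile classes. Features, labels, and data are invented placeholders.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Hypothetical survey-derived features per student.
data = pd.DataFrame({
    "visual_score":   [0.9, 0.2, 0.7, 0.1, 0.8, 0.3],
    "auditory_score": [0.1, 0.8, 0.3, 0.9, 0.2, 0.7],
    "reading_score":  [0.5, 0.4, 0.6, 0.3, 0.7, 0.5],
    "profile":        ["visual", "auditory", "visual", "auditory", "visual", "auditory"],
})

X_train, X_test, y_train, y_test = train_test_split(
    data.drop(columns="profile"), data["profile"], test_size=0.33, random_state=0)

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```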
- Smart camera FPGA hardware implementation for semantic segmentation of wildfire imagery (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2024-06-13) Garduño Martínez, Eduardo; Rodriguez Hernández, Gerardo; Gonzalez Mendoza, Miguel; Hinojosa Cervantes, Salvador Miguel; School of Engineering and Sciences; Campus Monterrey; Ochoa Ruiz, Gilberto. In the past few years, the increasingly frequent occurrence of wildfires, a result of climate change, has devastated society and the environment. Researchers have explored various technologies to address this issue, including deep learning and computer vision solutions. These techniques have yielded promising results in semantic segmentation for detecting fire using visible and infrared images. However, implementing deep learning neural network models can be challenging, as it often requires energy-intensive hardware such as a GPU or a CPU with large cooling systems to achieve high image processing speeds, making it difficult to use in mobile applications such as drone surveillance. Therefore, to solve the portability problem, an FPGA hardware implementation is proposed to satisfy low power consumption requirements, achieve high accuracy, and enable fast image segmentation using convolutional neural network models for fire detection. This thesis employs a modified UNET model as the base model for fire segmentation. Subsequently, compression techniques reduce the number of operations performed by the model by removing filters from the convolutional layers and reducing the arithmetic precision of the CNN, decreasing inference time and storage requirements and allowing the Vitis AI framework to map the model architecture and parameters onto the FPGA. Finally, the model was evaluated using metrics employed in prior studies to assess the performance of fire detection segmentation models. Additionally, two fire datasets are used to compare different data types for fire segmentation models, including visible images, a fusion of visible and infrared images generated by a GAN model, fine-tuning of the fusion GAN weights, and the use of visible and infrared images independently, to evaluate the impact of visible-infrared information on segmentation performance.
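The compression step described in this entry (removing filters and reducing arithmetic precision) can be illustrated generically with PyTorch's structured pruning utilities. The sketch below is not the thesis code and does not reproduce the Vitis AI quantization/compilation flow, which uses its own toolchain.

```python
# Illustrative sketch only: structured (filter-level) pruning of a convolutional
# layer. Pruned filters are zeroed, not physically removed from the tensor; a
# downstream toolchain or a rebuilt, smaller layer would exploit the sparsity.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

conv = nn.Conv2d(in_channels=3, out_channels=32, kernel_size=3, padding=1)

# Zero out half of the output filters by L2 norm (dim=0 indexes output channels).
prune.ln_structured(conv, name="weight", amount=0.5, n=2, dim=0)
prune.remove(conv, "weight")  # make the pruning permanent on the weight tensor

x = torch.randn(1, 3, 256, 256)
print(conv(x).shape)  # torch.Size([1, 32, 256, 256]); pruned channels output zeros
```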
- Aspect based sentiment analysis in students' evaluation of teaching (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2024-05) Acosta Ugalde, Diego; Conant Pablos, Santiago Enrique; Gutiérrez Rodríguez, Andrés Eduardo; Juárez Jiménez, Julio Antonio; Morales Méndez, Rubén; School of Engineering and Sciences; Campus Monterrey; Camacho Zuñiga, Claudia. Student evaluations of teaching (SETs) are essential for assessing educational quality. Natural Language Processing (NLP) techniques can produce informative insights from these evaluations. The large quantity of text data received from SETs has surpassed the capacity for manual processing. Employing NLP to analyze student feedback offers an efficient method for understanding educational experiences, enabling educational institutions to identify patterns and trends that might have been difficult, if not impossible, to notice with a manual analysis. Data mining using NLP techniques can delve into the thoughts and perspectives of students on their educational experiences, identifying sentiments and aspects at a level of abstraction that human analysis may not perceive. I use different NLP techniques to enhance the analysis of student feedback in the form of comments and provide better insights into the factors that influence students' sentiments. This study aims to provide an overview of the various approaches used in NLP and sentiment analysis, focusing on analyzing the models and text representations used to classify numerical scores obtained from the text feedback of a corpus of SETs in Spanish. I provide a series of experiments using different text classification algorithms for sentiment classification over numerical scores of educational aspects. Additionally, I explore two Aspect Based Sentiment Analysis (ABSA) models, a pipeline and a multi-task approach, to extract broad and comprehensive insights from educational feedback for each professor. The results of this research demonstrate the effectiveness of using NLP techniques for analyzing student feedback. The sentiment classification experiments showed favorable outcomes, indicating that it is possible to use student comments to classify certain educational scores accurately. Furthermore, the qualitative results obtained from the ABSA models, presented in a user-friendly dashboard, highlight the efficiency and utility of employing these algorithms for the analysis of student feedback. The dashboard provides valuable insights into the sentiments expressed by students regarding various aspects of their educational experience, allowing for a more comprehensive understanding of the factors influencing their opinions. These findings highlight the potential of NLP in the educational domain, offering a powerful tool for institutions to gain a deeper understanding of student perspectives and make data-driven decisions to enhance the quality of education.
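A minimal baseline for the sentiment classification task described in this entry, assuming a TF-IDF representation and logistic regression in scikit-learn; the comments and labels are invented, not data from the study, and the ABSA models themselves are not shown.

```python
# Illustrative sketch only: a baseline sentiment classifier over student comments
# using TF-IDF features and logistic regression. Comments and labels are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

comments = [
    "La profesora explica con claridad y resuelve dudas",
    "Las clases fueron confusas y el material llegó tarde",
    "Excelente retroalimentación en las tareas",
    "No hubo organización en el curso",
]
labels = ["positive", "negative", "positive", "negative"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(comments, labels)
print(model.predict(["El curso estuvo muy bien organizado"]))
```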
- Caption generation with transformer models across multiple medical imaging modalities (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2023-06) Vela Jarquin, Daniel; Santos Díaz, Alejandro; Soenksen, Luis Ruben; Montesinos Silva, Luis Arturo; Ochoa Ruiz, Gilberto; School of Engineering and Sciences; Campus Monterrey; Tamez Peña, José Gerardo. Caption generation is the process of automatically providing text excerpts that describe relevant features of an image. This process is applicable to very diverse domains, including healthcare. The field of medicine is characterized by a vast amount of visual information in the form of X-rays, magnetic resonance images, ultrasound, and CT scans, among others. Descriptive texts generated to represent this kind of visual information can help medical professionals achieve a better understanding of the pathologies and cases presented to them and could ultimately allow them to make more informed decisions. In this work, I explore the use of deep learning to address the problem of caption generation in medicine. I propose the use of a Transformer model architecture for caption generation and evaluate its performance on a dataset comprised of medical images that range across multiple modalities and represented anatomies. Deep learning models, particularly encoder-decoder architectures, have shown increasingly favorable results in the translation from one information modality to another. Usually, the encoder extracts features from the visual data, and these features are then used by the decoder to iteratively generate a sequence in natural language that describes the image. In the past, various deep learning architectures have been proposed for caption generation. The most popular architectures in recent years involved recurrent neural networks (RNNs), long short-term memory (LSTM) networks, and only recently the use of Transformer-type architectures. The Transformer architecture has shown state-of-the-art performance in many natural language processing tasks such as machine translation, question answering, summarization, and, not long ago, caption generation. The use of attention mechanisms allows Transformers to better grasp the meaning of words in a sentence in a particular context. All these characteristics make Transformers well suited for caption generation. In this thesis I present the development of a deep learning model based on the Transformer architecture that generates captions for medical images of different modalities and anatomies, with the ultimate goal of helping professionals improve medical diagnosis and treatment. The model is tested on the MedPix online database, a compendium of medical imaging cases, and the results are reported. In summary, this work provides a valuable contribution to the field of automated medical image analysis.
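The sketch below illustrates the encoder-decoder pattern described in this entry: image features act as memory for a Transformer decoder that predicts caption tokens. It is a minimal PyTorch illustration, not the thesis model; dimensions, vocabulary size, and the stand-in feature extractor are assumptions.

```python
# Illustrative sketch only: CNN-style image features attended by a Transformer
# decoder to predict the next caption token. All sizes are assumptions.
import torch
import torch.nn as nn

d_model, vocab_size, max_len = 256, 1000, 20

image_encoder = nn.Sequential(            # stand-in for a pretrained CNN backbone
    nn.Conv2d(3, d_model, kernel_size=16, stride=16), nn.Flatten(2))
token_embed = nn.Embedding(vocab_size, d_model)
decoder = nn.TransformerDecoder(
    nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True), num_layers=2)
lm_head = nn.Linear(d_model, vocab_size)

image = torch.randn(1, 3, 224, 224)
memory = image_encoder(image).transpose(1, 2)        # (batch, patches, d_model)
tokens = torch.randint(0, vocab_size, (1, max_len))  # previously generated tokens
causal_mask = torch.triu(torch.full((max_len, max_len), float("-inf")), diagonal=1)

hidden = decoder(token_embed(tokens), memory, tgt_mask=causal_mask)
next_token_logits = lm_head(hidden)[:, -1]           # scores for the next token
print(next_token_logits.shape)                       # torch.Size([1, 1000])
```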
- COVID-19 mortality prediction using deep neural networks (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2022-06) García Zendejas, Arturo; Morales Menéndez, Rubén; School of Engineering and Sciences; Campus Monterrey. COVID-19, the disease caused by the SARS-CoV-2 virus, appeared in Wuhan, China, in 2019; on March 11th, 2020, it was declared a global pandemic, claiming over 5,783,700 lives around the world by March 2022. COVID-19 spreads in several different ways: the SARS-CoV-2 virus can spread from the mouth or nose of an infected person through liquid particles whenever they cough, sneeze, speak, or breathe. Initial symptoms and the early development of the illness are catalogued as mild, so it may be difficult to identify which persons are more likely to develop severe disease. One great support that can be given to medical centers and the healthcare workforce would be the ability to predict which patients have a greater risk of death and would develop severe illness more quickly, in order to triage treatment and make decisions about resource distribution. Machine learning, and specifically deep learning, works by modelling hierarchical representations behind data, aiming to classify or predict patterns by stacking multiple layers of information. Some of its main applications are speech recognition, natural language processing, audio recognition, autonomous vehicles, and even medicine. In medicine, it has been used to predict how a disease develops and affects patients. For this thesis, state-of-the-art articles and models that aim to predict the behavior and development of COVID-19 patients and the illness itself were researched and compared. Their different datasets, metrics, models, and results were studied and used as a base to create the models proposed in this thesis. This research project proposes the use of machine learning models to predict the mortality of COVID-19 patients, using patient attributes such as vital signs, biomarkers, comorbidities, and diagnostics as input. The data was obtained for training and testing purposes from different medical centers, such as HM Hospitals, San Jose Hospital, and CEM Hospital. The main deep learning models used in this thesis are a deep multi-layer perceptron neural network that uses static attributes and a long short-term memory recurrent neural network that uses dynamic attributes. A mixed model combining the static and dynamic models was also created. Metrics that support the reduction of false negative cases were also used; the Maximum Probability of Correct Decision is the main metric used to evaluate and optimize the model. The models have been evaluated and compared with other machine learning models, such as Random Forest and eXtreme Gradient Boosting, over the different datasets.
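A minimal sketch of the kind of mixed model described in this entry, combining an MLP branch over static patient attributes with an LSTM branch over dynamic time-series measurements; feature counts, layer sizes, and the single-logit mortality head are assumptions, not the thesis architecture.

```python
# Illustrative sketch only: MLP over static attributes plus LSTM over dynamic
# attributes, fused into a single mortality-risk output. Sizes are assumptions.
import torch
import torch.nn as nn

class MixedMortalityModel(nn.Module):
    def __init__(self, n_static=12, n_dynamic=6, hidden=32):
        super().__init__()
        self.static_branch = nn.Sequential(
            nn.Linear(n_static, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU())
        self.dynamic_branch = nn.LSTM(n_dynamic, hidden, batch_first=True)
        self.head = nn.Linear(2 * hidden, 1)   # predicted probability of mortality

    def forward(self, static_x, dynamic_x):
        s = self.static_branch(static_x)
        _, (h, _) = self.dynamic_branch(dynamic_x)
        return torch.sigmoid(self.head(torch.cat([s, h[-1]], dim=1)))

model = MixedMortalityModel()
static_x = torch.randn(4, 12)        # e.g. comorbidities, demographics
dynamic_x = torch.randn(4, 24, 6)    # e.g. 24 time steps of vital signs
print(model(static_x, dynamic_x).shape)  # torch.Size([4, 1])
```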
- Use of reinforcement learning to help players improve their skills in Super Smash Bros. Melee (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2021-11-22) Estrada Valles, Jorge Alberto; Ramírez Uresti, Jorge Adolfo; Morales Manzanares, Eduardo; Sosa Hernández, Víctor Adrián; Medina Pérez, Miguel Ángel; Ingenieria y Ciencias; Campus Monterrey. eSports have become a huge industry in recent years, which has led to more and more people being interested in competing as professional players; however, not all players have the same opportunities, as factors like the player's current place of residence weigh heavily. This is especially true for fighting games: people who live in small cities or countries often struggle to find people with whom to practice, and even then it may not be the best practice, so they opt to play against the in-game AI, which is also not good practice. Because of this problem, new and more accessible ways for players to train must be created, which is why a reinforcement learning solution is proposed. In this thesis, we present a solution using Proximal Policy Optimization to help people train when their best option is the in-game AI. Furthermore, several additions, namely multiple time-step actions, reward shaping, and specialized training, are suggested to optimize the created model for use as a training partner by a human. To evaluate the effectiveness of the resulting model, the game Super Smash Bros. Melee was used to compare the improvement achieved by training against our bot and against the in-game AI. The results show that people who trained against the bot improved more than the people who trained against the in-game AI, showing that it is a good way to help players train for eSports competitions.
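A minimal sketch of training a Proximal Policy Optimization agent, assuming Stable-Baselines3 and Gymnasium; a standard environment stands in for the Melee interface used in the thesis, and the hyperparameters are placeholders rather than the tuned values or the described additions (multi-step actions, reward shaping, specialized training).

```python
# Illustrative sketch only: PPO training loop with Stable-Baselines3. A standard
# Gymnasium environment stands in for the actual game interface.
import gymnasium as gym
from stable_baselines3 import PPO

env = gym.make("CartPole-v1")            # stand-in for a Melee environment
model = PPO("MlpPolicy", env, n_steps=2048, batch_size=64, verbose=0)
model.learn(total_timesteps=10_000)

obs, _ = env.reset()
action, _ = model.predict(obs, deterministic=True)
print("chosen action:", action)
```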
- Feature transformations for improving the performance of selection hyper-heuristics on job shop scheduling problems (Instituto Tecnológico y de Estudios Superiores de Monterrey) Garza Santisteban, Fernando; Terashima Marín, Hugo; Özcan, Ender; School of Engineering and Sciences; Campus Monterrey; Amaya Contreras, Ivan. Solving Job Shop (JS) scheduling problems is a hard combinatorial optimization problem. Nevertheless, it is one of the most common problems in real-world scheduling environments. Throughout recent computer science history, a plethora of methods to solve this problem have been proposed. Despite this fact, the JS problem remains a challenge. The domain itself is of interest to industry, and many operations research problems are based on this problem. The solution to JS problems is broadly beneficial to industry by generating more efficient processes. Authors have proposed solutions to this problem using dispatching rules, direct mathematical methods, and meta-heuristics, among others. In this research, the application of feature transformations for the generation of improved selection constructive hyper-heuristics (HHs) is shown. There is evidence that applying feature transformations in other domains has produced promising results; also, no previous work was found where this approach has been used for the JS domain. This thesis is presented to earn the Master's degree in Computer Science of Tecnológico de Monterrey. The research's main goals are: (1) the assessment of the extent to which HHs can perform better on JS problems than single heuristics and are not specific to the instances used to train them; and (2) the degree to which HHs generated with feature transformations are improved. Experiments were carried out using instances of various sizes published in the literature. The research involved profiling the set of heuristics chosen, analyzing the interactions between the heuristics and feature values throughout the construction of a solution, and studying the performance of HHs without transformations and with two transformations found in the literature. Results indicate that, for the instances used, HHs were able to outperform the results achieved by single heuristics. Regarding feature transformations, it was found that they induce a scaling effect on feature values throughout the solution process, which produces more stable HHs, with a median performance comparable to HHs without feature transformations, but not necessarily better. Results are conclusive in terms of the objectives of this research. Nevertheless, there are several ideas that could be explored to improve the HHs, which are outlined and discussed in the final chapter of the thesis. The following major contributions are derived from this research: (1) applying a selection constructive HH approach, with feature transformations, to the JS domain; (2) the rationale behind the JS subproblem dependence in terms of the solution paths followed by the heuristics, which has a great impact on the training process of the HHs; (3) a method to determine the most suitable parameters to apply feature transformations, which could be extended to other domains of combinatorial optimization problems; and (4) a framework for studying HHs in the Job Shop domain.
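The sketch below illustrates the general shape of a selection constructive hyper-heuristic with a feature transformation: at each construction step it computes problem-state features, min-max scales them, and uses the transformed values to pick a dispatching rule. The rules, features, and selection table are toy examples, not the method, parameters, or transformations studied in the thesis.

```python
# Illustrative sketch only: a toy selection constructive hyper-heuristic that
# min-max scales problem-state features and picks a dispatching rule per step.
import numpy as np

def spt(jobs):  # Shortest Processing Time rule
    return min(jobs, key=lambda j: j["time"])

def lpt(jobs):  # Longest Processing Time rule
    return max(jobs, key=lambda j: j["time"])

RULES = [spt, lpt]
# Toy selector: prototype feature vectors (mean time, remaining jobs) per rule.
PROTOTYPES = np.array([[0.3, 0.8], [0.7, 0.2]])

def transform(features, lo, hi):
    """Min-max scaling, one possible feature transformation."""
    return (features - lo) / (hi - lo + 1e-9)

def solve(jobs):
    schedule, lo, hi = [], np.array([1.0, 0.0]), np.array([10.0, float(len(jobs))])
    while jobs:
        raw = np.array([np.mean([j["time"] for j in jobs]), len(jobs)])
        feat = transform(raw, lo, hi)
        rule = RULES[int(np.argmin(np.linalg.norm(PROTOTYPES - feat, axis=1)))]
        job = rule(jobs)                       # apply the selected heuristic
        schedule.append(job["id"])
        jobs = [j for j in jobs if j["id"] != job["id"]]
    return schedule

jobs = [{"id": i, "time": t} for i, t in enumerate([5, 2, 9, 4])]
print(solve(jobs))
```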
- Detection of Violent Behavior in Open Environments Using Pose Estimation and Neural Networks (Instituto Tecnológico y de Estudios Superiores de Monterrey) Chong Loo, Kevin Brian Kwan; Terashima Marín, Hugo; Conant Pablos, Santiago Enrique; Escuela de Ingeniería y Ciencia; Campus Monterrey. People's safety and security have always been issues that demand attention. With the coming of technological advances, part of them has been used to improve safeguards, though other aspects, used without precautions, have made people even more vulnerable. People can get their sensitive data stolen or become victims of transaction fraud. These may be crimes committed without physical interaction, but felonies with physical violence still exist. Some solutions for pedestrian safety are guards, police cars patrolling, sensors, and security cameras. Nonetheless, these methods only react when the crime is happening or, even more critically, when it has already occurred and the damage has been done. Therefore, numerous methods have been implemented using Artificial Intelligence to solve this problem. Many approaches to detecting violent behavior and recognizing actions rely on 3D convolutional neural networks (3D CNNs), spatio-temporal models, long short-term memory networks, and pose estimation, among other implementations. However, current state-of-the-art approaches do not work perfectly and are not adapted to uncontrolled environments. Therefore, a significant contribution of this work was the development of a new solution model able to detect violent behavior. This approach focuses on using pedestrian detection, tracking, pose estimation, and neural networks to predict pedestrian behavior in video frames. The method uses a time window of frames to extract joint angles, given by the pose estimation algorithm, as features for classifying behavior. At the time this thesis project was developed, there were not many databases with violent behavior videos. The ones that existed were low quality: cluttered scenes where pedestrians cannot be seen clearly, and with unfixed camera angles. Consequently, another important contribution of this work was creating a new database, Kranok-NV, with a total of 3,683 normal and violent videos. This database was used to train and test the solution model. For the evaluation, a protocol was designed using 10-fold cross-validation. With the implemented solution model, an accuracy of more than 98% was achieved on the Kranok-NV database. This approach surpassed the performance of state-of-the-art methods for violence detection and action recognition on the developed database. Though this new solution model is able to detect violent and normal behavior, it can easily be extended to classify more types of behaviors. Further work requires testing this approach on emerging video databases and optimizing specific areas of the solution model. Additionally, the contributions of this work can aid in the development of new approaches.
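A small sketch of the joint-angle feature extraction described in this entry: the angle at a joint is computed from three pose keypoints, and such angles collected over a time window would form the classifier's input. The keypoint values are invented and no specific pose estimator is assumed.

```python
# Illustrative sketch only: computing a joint angle from three pose keypoints
# (e.g. shoulder-elbow-wrist). Keypoint coordinates are invented placeholders.
import numpy as np

def joint_angle(a, b, c):
    """Angle at point b (in degrees) formed by segments b->a and b->c."""
    a, b, c = map(np.asarray, (a, b, c))
    v1, v2 = a - b, c - b
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9)
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# Hypothetical (x, y) keypoints for one frame: shoulder, elbow, wrist.
shoulder, elbow, wrist = (0.42, 0.30), (0.48, 0.45), (0.60, 0.50)
print(round(joint_angle(shoulder, elbow, wrist), 1))  # elbow angle for this frame

# Collected over a window of N frames and K joints, these angles form the
# N x K feature sequence fed to the behavior classifier.
```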

