Ciencias Exactas y Ciencias de la Salud
Permanent URI for this collection: https://hdl.handle.net/11285/551014
This collection contains the theses and graduate research works of the doctoral programs of the School of Engineering and Sciences and the School of Medicine and Health Sciences.
Search Results
- A methodology to select downsized object detection algorithms for resource-constrained hardware using custom-trained datasets (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2025-12-03) Medina Rosales, Adán; Ponce Cruz, Pedro; emipsanchez; López Cadena, Edgar Omar; Montesinos Silva, Luis Arturo; Balderas Silva, David Christopher; Ponce Espinosa, Hiram Eredín; School of Engineering and Sciences; Campus Ciudad de México
Downsized object detection algorithms have gained relevance with the rise of edge computing and their deployment on small mobile devices such as drones and small robots. This has led to exponential growth of the field, with several new algorithms presented every year. Because there is no time to test them all, most benchmarks focus on the full-sized versions and compare only training results. This creates a gap in the state of the art, since no comparisons of downsized algorithms are available, in particular comparisons that use custom-built datasets for training and resource-constrained hardware for deployment. This work gives the reader a comprehensive picture of the behavior of downsized algorithms (mostly from the YOLO family) when trained with small datasets, combining training metrics with implementation metrics. The case study is a fiber extrusion device with three classes: one with no defects, one that looks very similar to the defect-free class apart from small changes, and one whose difference is more immediately apparent. The study shows how well the algorithms distinguish these classes using two dataset sizes, and also reports training times and results from deployment on several resource-constrained hardware platforms.
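For the entry above, the implementation-side metrics it mentions (per-frame latency and throughput on constrained hardware) can be measured with a short timing loop. The sketch below is a minimal illustration, assuming a PyTorch environment with internet access; the public ultralytics/yolov5 hub entry, the yolov5n variant, and the 640×640 dummy input are stand-ins, not the thesis's own models or data:

```python
# Minimal sketch: timing a downsized detector's inference on a target device.
# The hub entry ('ultralytics/yolov5', 'yolov5n') and input size are illustrative
# assumptions; the loop itself is what matters for latency/FPS measurement.
import time
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
model = torch.hub.load("ultralytics/yolov5", "yolov5n", pretrained=True).to(device).eval()

dummy = torch.rand(1, 3, 640, 640, device=device)  # stand-in for a camera frame

# Warm-up to exclude one-time initialization costs from the measurement.
with torch.no_grad():
    for _ in range(10):
        model(dummy)
if device == "cuda":
    torch.cuda.synchronize()

n_runs = 100
start = time.perf_counter()
with torch.no_grad():
    for _ in range(n_runs):
        model(dummy)
if device == "cuda":
    torch.cuda.synchronize()
elapsed = time.perf_counter() - start

print(f"mean latency: {1000 * elapsed / n_runs:.1f} ms, throughput: {n_runs / elapsed:.1f} FPS")
```

On an actual edge device the same loop would run on the target accelerator, and the resulting FPS figure would be read alongside the training-side detection metrics to build the combined picture the thesis describes.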
- Comparative study of mass-accommodation methods and energy balances for melting paraffin wax in cylindrical thermal energy storage systems (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2025-12-03) Silva Nava, Valter; Otero Hernández, José Antonio; Hernández Cooper, Ernesto Manuel; emimmayorquin, emipsanchez; Santiago Acosta, Rubén Darío; Melo Máximo, Dulce Viridiana; School of Engineering and Sciences; Campus Ciudad de México; Chong Quero, Jesús Enrique
This study introduces two innovative methods for modeling how paraffin wax melts inside a centrally heated annular space. Both approaches tackle the challenge of volume changes during melting by ensuring total mass is conserved, keeping the material mass constant, and adding a new equation of motion. To manage these volume shifts in a cylindrical setup, one method allows the outer radius to expand or contract radially, while the other treats the extra liquid volume as a dynamic variable along the central axis. Each method's energy–mass balance at the boundary between the liquid and solid yields equations that describe how the interface moves, with only slight differences that still respect mass conservation. When melting occurs rapidly, the steady-state values for both volume and interface position are directly linked to the densities of the liquid and solid forms. The methods were put to the test in a vertical annular region filled with paraffin wax, where thermodynamic properties were fine-tuned by minimizing the gap between measured and predicted temperatures. The widely used local energy balance at the melting front can sometimes mislead, depending on starting conditions, boundaries, and material traits. In contrast, the total energy balance method aligns closely with equilibrium, as shown by its agreement with thermodynamic equilibrium in saturated mixtures, and it delivers much smaller errors than the local approach. In a melting experiment using paraffin RT50 inside a thermally insulated cylinder, the local energy balance underestimated the melting front position by 2.4% to 6.9%, whereas the total energy balance method kept discrepancies between 0.28% and 5.71%.
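As context for the entry above, the classical local (Stefan-type) energy balance it questions, and the mass-conservation relation that makes the accommodated liquid volume well defined, can be written generically as follows. This is a textbook-style sketch with generic symbols (thermal conductivities k_s, k_l, densities ρ_s, ρ_l, latent heat L_f, front position s(t), excess liquid volume V_e), not the thesis's exact formulation:

```latex
% Local energy balance (classical Stefan condition) at the melting front r = s(t):
\rho_s L_f \frac{ds}{dt}
  = k_s \left.\frac{\partial T_s}{\partial r}\right|_{r=s(t)^{+}}
  - k_l \left.\frac{\partial T_l}{\partial r}\right|_{r=s(t)^{-}}

% Mass conservation: the melted mass must reappear as liquid mass, so the excess
% liquid volume V_e accommodated radially or axially follows the density ratio:
\rho_s \, V_{\text{melted}} = \rho_l \left( V_{\text{melted}} + V_e \right)
\quad\Longrightarrow\quad
V_e = \left( \frac{\rho_s}{\rho_l} - 1 \right) V_{\text{melted}}
```

The density-ratio relation is what ties the steady-state liquid volume and interface position to the solid and liquid densities in either accommodation scheme.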
- A hybrid multi-objective optimization approach to neural architecture search for super resolution image restoration (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2025-07) Llano García, Jesús Leopoldo; Monroy Borja, Raúl; emipsanchez; Cantoral Ceballos, José Antonio; Mezura Montes, Efrén; Rosales Pérez, Alejandro; Ochoa Ruiz, Gilberto; School of Engineering and Sciences; Campus Estado de México; Sosa Hernández, Víctor Adrián
Super-resolution image restoration (SRIR) aims to reconstruct a high-resolution image from a degraded low-resolution input. It plays a key role in domains such as surveillance, medical imaging, and content creation. While recent approaches rely on deep neural networks, most architectures remain handcrafted through laborious and error-prone trial-and-error processes. Neural Architecture Search (NAS) seeks to automate the design of deep models, balancing predictive accuracy with constraints like latency and memory usage. Formulating NAS as a bi-level, multi-objective optimization problem highlights these trade-offs and motivates the development of flexible search spaces and strategies that prioritize both performance and efficiency. Prior NAS efforts for SRIR frequently rely on fixed cell structures, scalarized objectives, or computationally intensive pipelines, limiting their practicality on resource-constrained platforms. Benchmarking shows that such methods often struggle to jointly minimize parameters, FLOPs, and inference time without compromising image reconstruction quality. We propose the Branching Architecture Search Space (BASS), a layer-based, multi-depth, multi-branch design that supports dynamic selection, allocation, and repetition of operations. To explore BASS, we introduce a hybrid NAS framework that combines NSGA-III with hill-climbing refinements, guided by SynFlow as a zero-cost trainability estimator. The hybrid approach achieves superior trade-offs in trainability, parameter efficiency, and computational cost when given the same number of function evaluations as vanilla NSGA-III, and reaches comparable Pareto-front approximations with substantially fewer evaluations. The resulting solutions offer enhanced model quality, reduced complexity, and improved deployment suitability for real-world SRIR tasks. Extensive search experiments yield a diverse Pareto front of candidate architectures. Representative designs are fully trained on DIV2K and evaluated across standard SR benchmarks (Set5, Set14, BSD100, Urban100) at ×2, ×3, and ×4 upscales. Balanced models achieve competitive PSNR while operating with significantly fewer parameters and FLOPs than heavyweight baselines. The hybrid search demonstrates faster convergence and improved trade-off resolution compared to single-strategy alternatives, as supported by Bayesian statistical analysis. The combination of BASS and hybrid NSGA-III enables the discovery of SRIR architectures that effectively balance accuracy and resource constraints. This approach facilitates deployment on embedded and real-time systems and offers a generalizable framework for resource-aware NAS across other dense prediction tasks.
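The zero-cost trainability estimator named in the entry above, SynFlow, can be computed without any training data: weights are replaced by their absolute values, an all-ones input is propagated, and the score aggregates |weight × gradient| over all parameters (per Tanaka et al., 2020). The sketch below is a minimal PyTorch illustration of that procedure; the toy CNN and the 3×64×64 input shape are placeholders, not BASS candidate architectures:

```python
# Minimal sketch of the SynFlow zero-cost score used to rank candidate
# architectures without training. Double precision avoids overflow in deep nets.
import torch
import torch.nn as nn

@torch.enable_grad()
def synflow_score(model: nn.Module, input_shape=(1, 3, 64, 64)) -> float:
    model = model.double()

    # Replace every weight by its absolute value so all gradient paths add up,
    # remembering the original signs to restore them afterwards.
    signs = {}
    for name, p in model.named_parameters():
        signs[name] = torch.sign(p.data)
        p.data.abs_()

    model.zero_grad()
    ones = torch.ones(input_shape, dtype=torch.double)
    model(ones).sum().backward()

    score = sum(
        (p.grad.abs() * p.data).sum().item()
        for p in model.parameters() if p.grad is not None
    )

    # Restore the original signs so the candidate network is left unchanged.
    for name, p in model.named_parameters():
        p.data.mul_(signs[name])
    return score

# Example: score a toy convolutional candidate (illustrative only).
candidate = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.Conv2d(16, 3, 3, padding=1)
)
print(synflow_score(candidate))
```

In a hybrid search of the kind described above, such a score would stand in for expensive training during both the NSGA-III ranking and the hill-climbing refinement steps.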
- Geospatial location-allocation optimization for maximum coverage in pharmacy-based immunization strategies (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2025-06) Romero Mancilla, Marisol Saraí; Mora Vargas, Jaime; emipsanchez; Santos Borbolla, Cipriano Arturo; Regis Hernández, Fabiola; Smith Cornejo, Neale Ricardo; School of Engineering and Sciences; Campus Ciudad de México; Ruiz, Angel
Since the beginning of the Coronavirus disease outbreak in 2019, around 184 million cases and 4 million deaths have been reported worldwide [101]. The early decisions made by the entities responsible for managing the health emergency were crucial in defining each country's future. Detection through massive population screening tests, containment through the isolation of suspected and infected cases, and preparation of the healthcare system to face the demand were key factors in controlling the transmission of the disease. Therefore, the primary objective of this thesis is to develop an adaptable, integrated geospatial model that ensures efficient distribution of vaccines during health emergencies, illustrated through a case study in Jalisco, Mexico. To this end, a literature review on pharmacy-based immunization was conducted before developing the mathematical approach. Subsequently, a facility location-allocation problem was formulated, and a hybrid approach known as fix-and-optimize was used to solve larger instances with the Gurobi optimization software. In addition to contributing to the literature on Humanitarian Logistics in health systems, the results will inform national policymakers at the tactical and strategic decision levels about the development of anticipatory governance for managing resources in situations such as those arising from the Covid-19 pandemic.
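For the entry above, the core location-allocation formulation for maximum coverage is a maximal covering location problem, which can be stated in Gurobi in a few lines. The sketch below is a minimal illustration with made-up toy data (the demand values, coverage sets, and facility budget p are all assumptions), not the Jalisco case study or the fix-and-optimize decomposition itself:

```python
# Minimal sketch of a maximal covering location model of the kind solved here
# with Gurobi: open at most p pharmacies so that the covered demand is maximized.
import gurobipy as gp
from gurobipy import GRB

demand = {0: 120, 1: 80, 2: 150, 3: 60}        # population at each demand node (toy)
covers = {0: [0, 1], 1: [1, 2], 2: [2, 3]}     # candidate site -> nodes it covers (toy)
p = 2                                          # number of pharmacies to open

m = gp.Model("mclp")
x = m.addVars(list(covers), vtype=GRB.BINARY, name="open")     # open site j
y = m.addVars(list(demand), vtype=GRB.BINARY, name="covered")  # node i covered

m.setObjective(gp.quicksum(demand[i] * y[i] for i in demand), GRB.MAXIMIZE)
m.addConstr(x.sum() <= p, name="budget")
for i in demand:
    # A node counts as covered only if some open site reaches it.
    m.addConstr(y[i] <= gp.quicksum(x[j] for j in covers if i in covers[j]), name=f"cov_{i}")

m.optimize()
print("open sites:", [j for j in covers if x[j].X > 0.5])
```

A fix-and-optimize scheme would then iterate over subsets of the binary variables, fixing the remaining ones at their incumbent values so that each subproblem solved by Gurobi stays tractable on larger instances.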
- Deep learning and natural language processing for computer aided diagnosis (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2025-06) Hussain, Sadam; Tamez Peña, Jose Gerardo; emipsanchez; Santos Díaz, Alejandro; Martínez Ledesma, Juan Emmanuel; Bron, Esther E.; Mery, Domingo; School of Engineering and Sciences; Campus Monterrey
Multimodal artificial intelligence (AI) is a cutting-edge technique that integrates diverse modalities, such as imaging and textual data, to enhance classification and regression tasks. This dissertation focuses on the integration, comparison, and evaluation of multimodal AI for breast cancer diagnosis and prognosis. To achieve these objectives, we curated a comprehensive multimodal dataset comprising digital mammograms and corresponding radiological reports. Leveraging this dataset, we introduced and assessed various state-of-the-art (SOTA) multimodal techniques for three key tasks: breast cancer classification, reduction of false-positive biopsies with explainable AI (XAI), and short-term (5-year) risk prediction of breast cancer. In this work, we also introduced a benchmark dataset of radiological reports from breast cancer patients and provided baseline performance evaluations using SOTA machine learning (ML), deep learning (DL), and large language models (LLMs) for BI-RADS category classification. Our approach evaluated the performance of diverse SOTA multimodal architectures, including ResNet, VGG, EfficientNet, MobileNet, and Vision Transformers (ViT). For textual data processing, we employed both general-purpose and domain-specific pretrained LLMs such as BERT, bioGPT, ClinicalBERT, and DeBERTa, which were also integrated into multimodal architectures for enhanced classification. Notably, our proposed multiview multimodal feature fusion (MMFF) architecture, combining SE-ResNet50 with an artificial neural network (ANN), achieved an AUC of 0.965 for breast cancer classification, significantly outperforming both single-modal and multimodal SOTA architectures. For reducing unnecessary breast biopsies, our multimodal approach achieved an AUC of 0.72, showcasing its clinical utility in minimizing patient burden. Moreover, our ViT and bioGPT-based multimodal architecture achieved an AUC of 0.77 for short-term risk prediction, outperforming the SOTA MIRAI model, which achieved an AUC of 0.59 on our in-house dataset. This work highlights the potential of multimodal AI in advancing breast cancer diagnosis and prognosis, demonstrating its superiority over traditional and unimodal approaches across multiple critical tasks.
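The feature-fusion idea behind the MMFF result in the entry above, image features and report-text features concatenated before a small classifier head, can be sketched as follows. The backbones (a plain ResNet-50 and bert-base-uncased), the feature sizes, and the two-class head are illustrative assumptions, not the SE-ResNet50-based configuration evaluated in the dissertation:

```python
# Minimal sketch of image-text feature fusion for a diagnostic classifier.
import torch
import torch.nn as nn
from torchvision.models import resnet50
from transformers import AutoModel, AutoTokenizer

class FusionClassifier(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.cnn = resnet50(weights=None)
        self.cnn.fc = nn.Identity()                                  # 2048-d image features
        self.text = AutoModel.from_pretrained("bert-base-uncased")   # 768-d [CLS] features
        self.head = nn.Sequential(
            nn.Linear(2048 + 768, 256), nn.ReLU(), nn.Dropout(0.3),
            nn.Linear(256, n_classes),
        )

    def forward(self, image, input_ids, attention_mask):
        img_feat = self.cnn(image)
        txt_feat = self.text(input_ids=input_ids,
                             attention_mask=attention_mask).last_hidden_state[:, 0]
        # Concatenate both modalities and classify from the fused representation.
        return self.head(torch.cat([img_feat, txt_feat], dim=1))

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
batch = tok(["No suspicious findings.", "Spiculated mass, further assessment needed."],
            padding=True, return_tensors="pt")
model = FusionClassifier()
logits = model(torch.rand(2, 3, 224, 224), batch["input_ids"], batch["attention_mask"])
print(logits.shape)  # torch.Size([2, 2])
```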
- Modelling and Control Methodologies for Automated Systems Based on Regulation Control and Coloured Petri Nets (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2024-12-02) Anguiano Gijón, Carlos Alberto; Vázquez Topete, Carlos Renato; emimmayorquin; Navarro Gutiérrez, Manuel; Navarro Díaz, Adrán; Mercado Rojas, José Guadalupe; School of Engineering and Sciences; Campus Monterrey; Ramírez Treviño, Antonio
Industry 4.0 and smart manufacturing have brought new and interesting possibilities and challenges to the industrial environment. One of these challenges is the large-scale automation of increasingly complex systems with minimal set-up time and high flexibility, while allowing the integration of components and systems from different manufacturers for production customization. To face this challenge, control approaches based on Discrete Event Systems (DES), such as Supervisory Control Theory (based on either automata or Petri nets), Generalized Mutual Exclusion Constraints (GMEC), and Petri net-based Regulation Control, may provide convenient solutions. However, few works have been reported in the literature for the case of complex systems and implementation in real plants. The latter opens up an important area of research opportunities. In this dissertation, methodologies for the modelling and control of automated systems based on the Regulation Control approach using interpreted Petri nets are studied. Using this approach, it is possible to capture the information of a system through its inputs and outputs, which makes it possible to force sequences and generate more efficient controllers that can be directly translated to a Programmable Logic Controller (PLC). Through case studies, the effectiveness of these methodologies when implemented in more complex systems is demonstrated. Furthermore, the use of coloured Petri nets is proposed for the modelling of customized production systems. For this purpose, a new approach based on tensor arrays is introduced to express coloured Petri nets, allowing the use of algebraic techniques in the analysis of these systems.
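For the entry above, the algebraic backbone that Petri-net-based analyses build on is the state equation M' = M0 + C·σ, where C is the incidence matrix and σ the firing-count vector. The sketch below is a minimal NumPy illustration with a hypothetical three-place, two-transition net; it is not the interpreted or coloured (tensor-array) formulation developed in the dissertation:

```python
# Minimal sketch of the Petri net state equation M' = M0 + C·sigma.
import numpy as np

# Incidence matrix C (places x transitions): C[p, t] = post(p, t) - pre(p, t).
C = np.array([[-1,  0],    # p1 loses a token when t1 fires
              [ 1, -1],    # p2 gains from t1, loses to t2
              [ 0,  1]])   # p3 gains a token when t2 fires

M0 = np.array([1, 0, 0])   # initial marking: one token in p1
sigma = np.array([1, 1])   # fire t1 once, then t2 once

# Pre-incidence recovered from C (valid here because the toy net has no self-loops).
pre = np.maximum(-C, 0)

def is_enabled(M, t):
    """A transition is enabled when every input place holds enough tokens."""
    return np.all(M >= pre[:, t])

M = M0.copy()
for t in (0, 1):
    assert is_enabled(M, t)
    M = M + C[:, t]        # fire transition t

print(M)                                   # [0 0 1]
assert np.array_equal(M, M0 + C @ sigma)   # matches the state equation
```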
- Methodological approach to incorporate Deep Generative and Natural Language Processing models in the engineering design process (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2024-06) de la Tejera de la Peña, Javier Alberto; Ramírez Mendoza, Ricardo Ambrocio; emimmayorquin; Hernández Luna, Alberto Abelardo; Aguayo Téllez, Humberto; Anthony, Brian; School of Engineering and Sciences; Campus Monterrey; Bustamante Bello, Martín Rogelio
The engineering design process provides a methodical way to create or enhance designs that fulfill a particular need. With the advances in artificial intelligence, mainly in Deep Learning, a new perspective is emerging for systems engineering design aided by intelligent algorithms. However, a shortcoming of engineering design is its lack of quantitative outcomes, which makes the use of artificial intelligence troublesome. One solution is to use axiomatic design (AD) in the design process of systems. The work presented in this thesis introduces a methodology that combines classic engineering methodologies with current state-of-the-art algorithms and models for conceptual design, which is the main contribution of the research work. The proposed methodology is meant to reduce the time of the design process and to improve understanding of the needs and requirements involved in it, with the possibility of developing more robust designs that stay closer to the original needs and requirements.
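As context for the entry above, axiomatic design is what supplies the quantitative structure the entry refers to: functional requirements (FRs) are mapped onto design parameters (DPs) through a design matrix that algorithms can reason about. A generic two-by-two form of the design equation, not the thesis's specific matrices, is sketched below:

```latex
% Axiomatic design maps functional requirements onto design parameters through
% the design matrix A (generic 2x2 form):
\begin{Bmatrix} FR_1 \\ FR_2 \end{Bmatrix}
=
\begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}
\begin{Bmatrix} DP_1 \\ DP_2 \end{Bmatrix}
% Independence axiom: a diagonal A gives an uncoupled design, a triangular A a
% decoupled design; any other structure couples the functional requirements.
```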
- A methodology for modeling multiscale multiphysics nature that bridges basic science with sustainable manufacturing technologies using human and Artificial intelligence (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2024-05-22) Estrada Diaz, Jorge Alfredo; Elías Zúñiga, Alex; emimmayorquin; Martínez Romero, Oscar; Palacios Pineda, Luis Manuel; Ruiz Huerta, Leopoldo; Escuela de Ingeniería y Ciencias; Campus Monterrey; Olvera Trejo, Daniel
This dissertation deals with the modeling of multiscale multiphysics phenomena. These complex processes involve interactions between physical occurrences of different natures, at different time and space scales, turning their description, prediction, and control into a daunting task. The work revolves around technologies that are pivotal for the manufacturing of advanced materials: Selective Laser Melting (SLM), electrospray, Ultrasonic Micro-Injection Molding (UMIM), and smart materials, i.e., Magneto-Rheological Elastomers (MRE). Modeling efforts combine classical yet powerful methodologies, such as dimensional analysis, with cutting-edge approaches such as fractal analysis and artificial intelligence, i.e., Artificial Neural Networks (ANNs) and Multiobjective Evolutionary Algorithms (MOEAs), with promising results that reflect their ability to capture the intricate interplay of process parameters and material properties in these convoluted phenomena. Since the two families of approaches offer complementary benefits (meaningful physical insight on the one hand, and efficient computation and pattern identification in data on the other), they should be exploited jointly when handling multiscale multiphysics phenomena.
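For the entry above, dimensional analysis is the classical tool mentioned: the Buckingham pi theorem collapses a relation among n physical variables spanning k independent base dimensions into n - k dimensionless groups, which is what allows process parameters acting at very different scales to be compared on equal footing. A generic statement, not the thesis's specific pi groups, is sketched below:

```latex
% Buckingham pi theorem (generic statement): a physically meaningful relation
% among n variables built from k independent base dimensions can be rewritten
% in terms of n - k dimensionless groups.
f(q_1, q_2, \ldots, q_n) = 0
\quad\Longrightarrow\quad
F(\Pi_1, \Pi_2, \ldots, \Pi_{n-k}) = 0,
\qquad
\Pi_i = \prod_{j=1}^{n} q_j^{\,a_{ij}} \ \ \text{(dimensionless)}
```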
- Classification of EEG signals: an assistance approach for remote rehabilitation for the upper limb (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2024) Lazcano Herrera, Alicia Guadalupe; Alfaro Ponce, Mariel; emipsanchez; Chairez Oria, Jorge Isaac; González Mendoza, Miguel; Guzmán Zavaleta, Zobeida Jezabel; School of Engineering and Sciences; Campus Estado de México; Fuentes Aguilar, Rita Quetziquel
Rehabilitation technologies help disabled people face the many challenges in their daily lives. As a consequence, there has been increasing interest in developing technologies such as Human-Computer Interfaces (HCI) and Brain-Computer Interfaces (BCI). These technologies can be triggered by many biosignals and their related studies or extraction techniques, one class of which carries information on brain activity. Electroencephalography represents electrical brain activity as a form of brain signal; the records produced by this technique are called electroencephalograms (EEG). This technique involves the pickup of the biopotential, signal conditioning, signal recording, and signal analysis, with one of its main goals being the observation and analysis of brain responses to sensory stimuli. Despite the many advantages of using EEG signals and other technologies for BCI composition, one of the challenges we face is the complexity of interpreting and classifying EEG signals. This is where the use of Artificial Intelligence (AI) and Machine Learning (ML) algorithms becomes crucial. The development of ML algorithms for EEG signal analysis is not just a trend but a necessity in our quest to understand and harness the power of brain signals. Nowadays, to analyze brain signals, algorithms such as neural networks have been used, and among the available architectures, Recurrent Neural Networks have become popular because they can provide context in their predictions. In this category can be found Long Short-Term Memory (LSTM) networks, which are neural networks with a memory block that can "store" information. Using this kind of ML algorithm for the analysis of EEG signals could help develop new technologies to assist impaired people through services such as remote assistance or remote rehabilitation. The present dissertation applies different Machine Learning (ML) techniques to analyze, process, and classify EEG signals and to integrate the resulting information into an application that can provide remote rehabilitation aid. The dissertation is divided into two major axes: one focuses on the EEG signals and their analysis, and the second focuses on the application of ML algorithms for classifying Motor Imagery (MI) information that could be integrated into a remote rehabilitation application. It discusses the results obtained with time-domain and frequency-domain techniques for extracting features from EEG signals in a publicly available dataset (the PhysioNet Motor Imagery dataset) and in an acquired dataset intended to replicate the information found in the literature, the application of ML algorithms for feature selection, the advantages of the normalization process, the application of two types of neural networks (recurrent and convolutional) to classify EEG MI information, and how this can be integrated into a platform for remote rehabilitation that helps avoid the abandonment of therapy and supports rehabilitation measures in remote places. These results highlight the use of BiLSTM networks for EEG MI classification, with an accuracy of 91.25%, and the use of the convolutional neural network SqueezeNet, with a maximum reported accuracy of 92.23%.
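For the entry above, a bidirectional LSTM classifier for windowed EEG epochs can be sketched in a few lines of PyTorch. The channel count, window length, hidden size, and number of motor-imagery classes below are illustrative assumptions (loosely following 64-channel recordings such as PhysioNet's), not the exact architecture behind the reported 91.25% accuracy:

```python
# Minimal sketch of a BiLSTM classifier for motor-imagery EEG epochs.
import torch
import torch.nn as nn

class BiLSTMClassifier(nn.Module):
    def __init__(self, n_channels=64, hidden=128, n_classes=4):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_channels, hidden_size=hidden,
                            num_layers=2, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):
        # x: (batch, time, channels) - one pre-processed MI epoch per sample
        out, _ = self.lstm(x)
        return self.fc(out[:, -1])       # classify from the last time step

# Example: 4-second epochs at 160 Hz from 64 channels, 4 MI classes (toy input).
epochs = torch.randn(8, 640, 64)
model = BiLSTMClassifier()
print(model(epochs).shape)               # torch.Size([8, 4])
```

In practice the time-domain or frequency-domain features described in the entry would replace the raw channel values fed to the recurrent layer.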
- Analysis and use of textual definitions through a transformer neural network model and natural language processing (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2021-12-02) Baltazar Reyes, Germán Eduardo; Ponce Cruz, Pedro; puemcuervo; McDaniel, Troy; Balderas Silva, David Christopher; Rojas Hernández, Mario; School of Engineering and Sciences; Campus Ciudad de México; López Caudana, Edgar Omar
There is currently an information overload problem: data is excessive, disorganized, and presented statically. These three problems are deeply related to the vocabulary used in each document, since the usefulness of a document is directly related to how much of its vocabulary is understood. At the same time, there are multiple Machine Learning algorithms and applications that analyze the structure of written information. However, most implementations focus on the bigger picture of text analysis, which is to understand the structure and use of complete sentences and how to create new documents as long as the originals. This problem directly affects the static presentation of data. For these reasons, this proposal evaluates the semantic similarity between a complete phrase or sentence and a single keyword, following the structure of a regular dictionary, where a descriptive sentence explains and shares the exact meaning of a single word. The model uses a GPT-2 Transformer neural network to interpret a descriptive input phrase and generate a new phrase intended to speak about the same abstract concept, similar to a particular keyword. The validation of the generated text is handled by a Universal Sentence Encoder network, which was fine-tuned to relate the semantic similarity between the words of a sentence, taken as a whole, and its corresponding keyword. The results demonstrated that the proposal can generate new phrases that resemble the general context of the descriptive input sentence and the ground-truth keyword. At the same time, the validation of the generated text was able to assign a higher similarity score to these phrase-word pairs. Nevertheless, this process also showed that deeper analysis is still needed to weigh and separate the context of different pairs of textual inputs. In general, this proposal marks a new area of study for analyzing the abstract relationship of meaning between sentences and particular words, and how an ordered series of words can be detected as similar to a single term, marking a different direction of text analysis from the one currently proposed and researched in most of the Natural Language Processing community.
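For the entry above, the two-stage pipeline (generate from a definition with GPT-2, then validate against the keyword with the Universal Sentence Encoder) can be approximated with off-the-shelf checkpoints. The sketch below assumes the public gpt2 model on Hugging Face and the TensorFlow Hub Universal Sentence Encoder, both untuned; the definition and keyword are made-up examples, not items from the thesis's data:

```python
# Minimal sketch: generate a phrase from a dictionary-style definition and score
# its semantic similarity to the target keyword. Requires transformers and
# tensorflow_hub; both models are downloaded on first use.
import numpy as np
import tensorflow_hub as hub
from transformers import pipeline

definition = "a domesticated animal that barks and is kept as a pet"
keyword = "dog"

# Stage 1: generate a candidate phrase from the definition (untuned GPT-2 here).
generator = pipeline("text-generation", model="gpt2")
generated = generator(definition, max_new_tokens=15,
                      num_return_sequences=1)[0]["generated_text"]

# Stage 2: score semantic similarity with the Universal Sentence Encoder.
use = hub.load("https://tfhub.dev/google/universal-sentence-encoder/4")
vec_gen, vec_key = use([generated, keyword]).numpy()
cosine = float(np.dot(vec_gen, vec_key) /
               (np.linalg.norm(vec_gen) * np.linalg.norm(vec_key)))
print(f"similarity({keyword!r}, generated) = {cosine:.3f}")
```

The thesis fine-tunes both stages; the cosine score here simply illustrates how a sentence-level embedding can relate a whole phrase to a single keyword.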

