Ciencias Exactas y Ciencias de la Salud

Permanent URI for this collection: https://hdl.handle.net/11285/551014

This collection contains the doctoral theses and graduate works of the doctoral programs of the Schools of Engineering and Sciences and of Medicine and Health Sciences.

Now showing 1 - 8 of 8
  • Doctoral thesis
    Modelling and Control Methodologies for Automated Systems Based on Regulation Control and Coloured Petri Nets
    (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2024-12-02) Anguiano Gijón, Carlos Alberto; Vázquez Topete, Carlos Renato; Navarro Gutiérrez, Manuel; Navarro Díaz, Adrán; Mercado Rojas, José Guadalupe; School of Engineering and Sciences; Campus Monterrey; Ramírez Treviño, Antonio
    Industry 4.0 and smart manufacturing have brought interesting new possibilities and challenges to the industrial environment. One of these challenges is the large-scale automation of increasingly complex systems with minimal set-up time and high flexibility, while allowing the integration of components and systems from different manufacturers for production customization. To face this challenge, control approaches based on Discrete Event Systems (DES), such as Supervisory Control Theory (based on either automata or Petri nets), Generalized Mutual Exclusion Constraints (GMEC) and Petri net-based Regulation Control, may provide convenient solutions. However, few works have been reported in the literature for the case of complex systems and implementation in real plants, which opens up an important area of research opportunities. In this dissertation, methodologies for the modelling and control of automated systems based on the Regulation Control approach using interpreted Petri nets are studied. With this approach, the information of a system can be captured through its inputs and outputs, which makes it possible to enforce sequences and generate more efficient controllers that can be translated directly to a Programmable Logic Controller (PLC). Through case studies, the effectiveness of these methodologies when implemented in more complex systems is demonstrated. Furthermore, the use of coloured Petri nets is proposed for the modelling of customized production systems. For this purpose, a new approach based on tensor arrays is introduced to express the coloured Petri nets, allowing the use of algebraic techniques in the analysis of these systems.
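The algebraic core that such Petri-net approaches build on, and that a tensor-array formulation generalizes to coloured nets, is the state equation M' = M + C·σ. Below is a minimal Python sketch of that equation for an ordinary place/transition net; the net, its incidence matrix, and all names are invented for illustration and are not taken from the dissertation.

```python
import numpy as np

# Incidence matrix C (places x transitions) for a toy 3-place, 2-transition
# net: t1 moves a token p1 -> p2, t2 moves it p2 -> p3.
C = np.array([[-1,  0],
              [ 1, -1],
              [ 0,  1]])

M0 = np.array([1, 0, 0])  # initial marking: one token in p1

def fire(M, C, t):
    """Fire transition t if enabled; return the new marking M + C[:, t]."""
    pre = np.where(C[:, t] < 0, -C[:, t], 0)  # tokens t consumes per place
    if not np.all(M >= pre):
        raise ValueError(f"transition {t} not enabled at marking {M}")
    return M + C[:, t]

M1 = fire(M0, C, 0)  # -> [0 1 0]
M2 = fire(M1, C, 1)  # -> [0 0 1]
print(M1, M2)
```

A coloured net adds a colour index to places and transitions; stacking one such incidence matrix per colour into a tensor is the kind of representation on which algebraic analysis can then operate.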
  • Doctoral thesis
    From words to sentences and back: characterizing context-dependent meaning representations in the brain
    (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2021-12-02) Aguirre Sampayo, Nora Elsa; Miikkulainen, Risto; Terashima Marin, Hugo; Cantú, Francisco; Grassmann, Uli; School of Engineering and Sciences; Campus Monterrey; Valenzuela Rendón, Manuel
    How do people understand concepts such as olive oil, baby oil, lamp oil, or oil paint? Embodied approaches to knowledge representation suggest that words are represented as a set of features that are the basic components of meaning. In particular, Binder et al. (2009) grounded this idea by mapping semantic features (attributes) to different brain systems in their Concept Attribute Representations (CAR) theory. Their fMRI experiments demonstrated that when humans listen to or read sentences, they use different brain systems to simulate seeing the scenes and performing the actions that are described. An intriguing challenge to this theory is that concepts are dynamic, i.e., word meaning depends on context. This dissertation addresses this challenge through the Context-dEpendent meaning REpresentations in the BRAin (CEREBRA) neural network model. Based on changes in the fMRI patterns, CEREBRA quantifies how word meanings change in the context of a sentence. CEREBRA was evaluated through three different computational experiments and through behavioral analysis. The experiments demonstrated that words in different contexts have different representations, that the changes observed in the concept attributes encode unique conceptual combinations, and that the new representations are more similar to the other words in the sentence than to the original representations. The behavioral analysis confirmed that the changes produced by CEREBRA are actionable knowledge that can be used to predict human responses. Together, these experiments constitute a comprehensive evaluation of CEREBRA's context-based representations as well as the soundness of CAR theory. Thus, CEREBRA is a useful tool for understanding how semantic knowledge is represented in the brain and for providing human-like, context-based representations for NLP applications.
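As a toy illustration of the question CEREBRA addresses (not the model itself, which works from fMRI patterns), the sketch below contrasts a word's base attribute vector with a hypothetical contextualized version; all vectors and the blending rule are invented for the example.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical 4-attribute vectors (say: vision, motor, taste, manipulation).
oil  = np.array([0.6, 0.1, 0.7, 0.2])
lamp = np.array([0.9, 0.2, 0.0, 0.5])

def contextualize(word, context, alpha=0.5):
    """Crude stand-in for a context effect: pull the word toward the context."""
    return (1 - alpha) * word + alpha * context

oil_in_lamp = contextualize(oil, lamp)

# The contextualized vector drifts away from the original "oil" and toward
# its sentence companion, mirroring the pattern the dissertation reports.
print(cosine(oil, oil_in_lamp))                       # < 1: meaning shifted
print(cosine(oil_in_lamp, lamp) > cosine(oil, lamp))  # True: closer to "lamp"
```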
  • Doctoral thesis
    Analysis and use of textual definitions through a transformer neural network model and natural language processing
    (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2021-12-02) Baltazar Reyes, Germán Eduardo; Ponce Cruz, Pedro; McDaniel, Troy; Balderas Silva, David Christopher; Rojas Hernández, Mario; School of Engineering and Sciences; Campus Ciudad de México; López Caudana, Edgar Omar
    There is currently an information overload problem: data is excessive, disorganized, and presented statically. These three problems are deeply related to the vocabulary used in each document, since the usefulness of a document is directly related to how much of its vocabulary is understood. At the same time, there are multiple Machine Learning algorithms and applications that analyze the structure of written information. However, most implementations focus on the bigger picture of text analysis: understanding the structure and use of complete sentences and creating new documents as long as the originals. This directly affects the static presentation of data. For these reasons, this proposal evaluates the semantic similarity between a complete phrase or sentence and a single keyword, following the structure of a regular dictionary, where a descriptive sentence explains and shares the exact meaning of a single word. The model uses a GPT-2 Transformer neural network to interpret a descriptive input phrase and generate a new phrase that speaks about the same abstract concept, similar to a particular keyword. The generated text is validated by a Universal Sentence Encoder network, fine-tuned to properly relate the semantic similarity between the total sum of words of a sentence and its corresponding keyword. The results demonstrated that the proposal can generate new phrases that resemble the general context of the descriptive input sentence and the ground-truth keyword. At the same time, the validation network was able to assign a higher similarity score to these phrase-word pairs. Nevertheless, the process also showed that deeper analysis is still needed to weigh and separate the context of different pairs of textual inputs. In general, this proposal marks a new area of study for analyzing the abstract relationship of meaning between sentences and particular words, and how a series of ordered vocables can be detected as similar to a single term, marking a different direction of text analysis from the one currently pursued in most of the Natural Language Processing community.
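A condensed sketch of such a pipeline is shown below, assuming the Hugging Face transformers GPT-2 and the public Universal Sentence Encoder from TF-Hub (without the thesis's fine-tuning); the definition, keyword, and sampling settings are arbitrary examples.

```python
import numpy as np
import tensorflow_hub as hub
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
gpt2 = GPT2LMHeadModel.from_pretrained("gpt2")

definition = "A large natural stream of water flowing to the sea."
ids = tok.encode(definition, return_tensors="pt")
out = gpt2.generate(ids, max_new_tokens=20, do_sample=True, top_p=0.9,
                    pad_token_id=tok.eos_token_id)
generated = tok.decode(out[0][ids.shape[1]:], skip_special_tokens=True)

# Universal Sentence Encoder: one 512-d embedding per phrase or single word.
use = hub.load("https://tfhub.dev/google/universal-sentence-encoder/4")
emb = use([definition, generated, "river"]).numpy()

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print("definition vs keyword:", cosine(emb[0], emb[2]))
print("generated  vs keyword:", cosine(emb[1], emb[2]))
```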
  • Doctoral thesis
    A single-step and GPU-accelerated simplified lattice Boltzmann method for high turbulent flows on complex domains
    (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2021-08-03) Delgado Gutiérrez, Arturo Javier; Cárdenas Fuentes, Diego Ernesto; Roman Flores, Armando; Montesinos Castellanos, Alejandro; Jáuregui Correa, Juan Carlos; Marzocca, Pier; School of Engineering and Sciences; Campus Ciudad de México; Probst Oleszewski, Oliver Matthias
    Over the course of time, Computational Fluid Dynamics (CFD) has proven to be one of the main pillars in the study of fluid mechanics. From simple aerodynamic designs to highly complex meteorological forecasting, CFD models have served as some of the best tools available to designers and engineers. Nevertheless, high-fidelity CFD simulations often require a considerable amount of computational power to deliver accurate results. The exponential growth of computational technologies has been helping to solve this problem, but significant computational cost can also be saved by improving the underlying theory of the numerical simulation. The Lattice Boltzmann Method (LBM) is a relatively new approach to the simulation of fluid dynamics. Its overall computational efficiency has been shown to be far better than that of conventional Direct Numerical Simulation of the Navier-Stokes equations. However, conventional LBM algorithms are considerably limited to very specific simulation cases, such as fluid flows in a small range of the Reynolds number. This dissertation presents in detail a novel CFD model derived by combining the theory of conventional and more recent algorithms for the single-relaxation-time (SRT) LBM. The present model, entitled the "Single-Step and Simplified Lattice Boltzmann Method" (SS-LBM), is capable of delivering efficient and accurate fluid dynamics results over a wide range of the Reynolds number by coupling the main algorithm with an efficient sub-grid scale (SGS) turbulence model. Additionally, the SS-LBM model is designed to simulate complex terrains by generating the domain mesh with a parametric geometry based on bi-variate (2D) and tri-variate (3D) Non-Uniform Rational B-Spline (NURBS) functions. To significantly improve computational performance, the main algorithm is designed for execution on Graphics Processing Unit (GPU) architectures through the well-known OpenGL framework. By retaining the best properties of recent LBM algorithms such as the Simplified LBM (SLBM), the present model minimizes the memory footprint by allocating only the macroscopic flow variables (velocity and density), from which the required probability distribution functions (pdfs) are reconstructed. The complete algorithm is tested on numerous benchmark cases to validate and quantify its computational performance and spatial accuracy. For the 2D cases, the 2D lid-driven cavity (LDC) benchmark over a wide range of the Reynolds number is reported, along with the simulation of the fluid flow across a 2D circle and NACA airfoils. For the 3D cases, a 3D LDC benchmark is also performed, followed by a 3D jet flow inside a cavity. Finally, the algorithm is tested on a case with complex terrain and local refinement around the surface of interest.
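For readers unfamiliar with the method, the sketch below shows the textbook SRT-LBM update on a periodic D2Q9 lattice in Python: collide toward the local equilibrium, stream along the lattice velocities, and recover density and velocity as moments of the distributions. It is a generic illustration, not the SS-LBM of the thesis (no turbulence model, boundaries, NURBS mesh, or GPU execution).

```python
import numpy as np

# D2Q9 lattice: discrete velocities c_i and weights w_i.
c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
tau = 0.6                                 # relaxation time (sets viscosity)

nx, ny = 64, 64
rho = np.ones((nx, ny))
u = np.zeros((nx, ny, 2))
u[:, :, 0] = 0.05 * np.sin(2 * np.pi * np.arange(ny) / ny)  # initial shear

def equilibrium(rho, u):
    cu = np.einsum('qd,xyd->xyq', c, u)               # c_i . u
    usq = np.einsum('xyd,xyd->xy', u, u)[..., None]   # |u|^2
    return w * rho[..., None] * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

f = equilibrium(rho, u)                   # start from equilibrium pdfs
for _ in range(100):
    f -= (f - equilibrium(rho, u)) / tau  # SRT collision step
    for q in range(9):                    # streaming step (periodic domain)
        f[:, :, q] = np.roll(f[:, :, q], shift=tuple(c[q]), axis=(0, 1))
    rho = f.sum(axis=2)                                   # density moment
    u = np.einsum('xyq,qd->xyd', f, c) / rho[..., None]   # velocity moment
```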
  • Doctoral thesis
    Método de homogeneización asintótica para el cálculo de propiedades efectivas de un material nanocompuesto en tres dimensiones
    (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2021-03-02) Tapia Gaspar, Mónica; Otero Hernández, José Antonio; Martínez Rosado, Raúl; Hernández Cooper, Ernesto Manuel; Espinosa Almeyda, Yoanh; Escuela de Ingeniería y Ciencias; Campus Estado de México; Rodríguez Ramos, Reinaldo
    The mathematical modelling of nanomaterials is a vast and rapidly developing field, driven by the enormous demand for the design of new nanocomposite materials in areas such as the automotive and aerospace industries. The Asymptotic Homogenization Method (AHM) is well suited to estimating the overall effective properties of a composite. In this project, the effective properties of elastic materials reinforced with nano-inclusions of different geometric shapes are modelled by a semi-analytical method based on the AHM and the Finite Element Method (FEM). The unit-cell design accommodates different values of the volume fraction and aspect ratio of the nano-inclusion. Contact between the nano-inclusions and the composite matrix is assumed to be perfect. The numerical approach is based on modelling the periodic cell with the FEM; that is, the local problems and the effective properties obtained through the AHM are solved using the FEM. This combination of the AHM and the FEM for computing the effective properties of nanomaterials is what we call the Semi-Analytical Finite Element Method (SAFEM). The orientation of the reinforcements (fibres) influences the mechanical behaviour of the composite when it is subjected to stresses that produce deformations. The study therefore considers two cases: composites reinforced by "aligned fibres" (fibres oriented in a single direction) and composites reinforced by "misaligned fibres" (fibres oriented in different directions with respect to the spatial frame of the cell design). The results obtained with SAFEM show that it is a novel method with very broad scope for computing the effective elastic properties of nanocomposite materials. Comparisons with theoretical results reported in the literature on methods for estimating effective elastic properties are presented, together with comparisons with experimental results reported by other authors for nanocomposites reinforced with carbon nanowires (CNW).
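As a point of reference for what homogenization computes, the sketch below gives only the elementary Voigt and Reuss mixing bounds on the effective axial stiffness of a two-phase composite; the material values are invented, and the AHM/SAFEM of the thesis solves the full unit-cell problems rather than these bounds.

```python
E_matrix, E_fiber = 3.0, 1000.0  # GPa; e.g. a polymer matrix, a stiff nanofiber
vf = 0.05                        # fiber volume fraction

E_voigt = vf * E_fiber + (1 - vf) * E_matrix        # parallel (upper) bound
E_reuss = 1 / (vf / E_fiber + (1 - vf) / E_matrix)  # series (lower) bound
print(f"E_eff lies within [{E_reuss:.2f}, {E_voigt:.2f}] GPa")
```

Rigorous homogenization narrows this bracket by accounting for inclusion shape, orientation, and the periodic cell geometry.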
  • Doctoral thesis
    On the Conditional In-Control Performance of two Nonparametric Control Charts for Location with Unknown Mean or Median
    (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2020-12-03) Villanueva Guerra, Elena Cristina; Tercero Gómez, Víctor Gustavo; Cordero Franco, Alvaro Eduardo; Güemes Castorena, David; Jay Conover, William; Smith Cornejo, Neal Ricardo; School of Engineering and Sciences; Campus Monterrey; Beruvides, Mario
    Statistical process monitoring deals with the problem of assessing whether a process is in statistical control or not, through the use of control charts. Many of these charts rely on the knowledge of in-control parameters. However, when they are unknown, practitioners use an in-control sample to make estimations, or as a reference sample to follow a nonparametric procedure when no distribution function can be assumed. The effect of using estimates instead of known parameters, and how to deal with the problems that arise, have been studied over different control charts and practical situations. Nevertheless, the corresponding research on nonparametric control charts is scarce. This research attempts to reduce this gap by measuring, through the use of Monte Carlo simulations, the conditional effect of a reference sample on the in-control and out-of-control performances of two nonparametric CUSUM control charts based on the Wilcoxon and sequential normal scores statistics, in terms of the average run length and its variability. A nonparametric bootstrap procedure was also developed and evaluated to assist practitioners in designing ad-hoc control limits with the desired performance. The reference sample size was found to have a significant and negative effect on the average run length over both charts; simulations showed a need for large samples for relatively adequate performance. However, in almost all scenarios, the alternative based on sequential normal scores showed a smaller practitioner-to-practitioner variation. After the control limits calibration via bootstrap, the asymptotic results of the proposed nonparametric bootstrap showed a bias in performance.
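The sketch below illustrates the kind of Monte Carlo experiment described, under simplifying assumptions (a one-sided CUSUM of a standardized sequential rank against a fixed reference sample; the charts and design values in the thesis differ): the run length is simulated repeatedly for one given reference sample, so the resulting average is the conditional in-control ARL for that sample.

```python
import numpy as np

rng = np.random.default_rng(7)
m = 200                             # reference (Phase I) sample size
reference = rng.normal(size=m)      # one realized in-control reference sample

k, h = 0.5, 5.0                     # CUSUM allowance and control limit

def run_length(reference, rng, max_steps=100_000):
    n = len(reference) + 1
    s_hi = 0.0
    for t in range(1, max_steps + 1):
        x = rng.normal()                          # in-control observation
        r = np.sum(reference < x) + 1             # rank of x among n values
        z = (r - (n + 1) / 2) / np.sqrt((n**2 - 1) / 12)  # standardized rank
        s_hi = max(0.0, s_hi + z - k)             # upper one-sided CUSUM
        if s_hi > h:
            return t                              # (false) alarm time
    return max_steps

arl = np.mean([run_length(reference, rng) for _ in range(500)])
print(f"conditional in-control ARL for this reference sample: {arl:.0f}")
```

Repeating the whole experiment with fresh reference samples exposes the practitioner-to-practitioner variation the study measures.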
  • Doctoral thesis
    A methodology for prediction interval adjustment for short term load forecasting
    (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2020-12) Zúñiga García, Miguel Ángel; Batres Prieto, Rafael; Santamaría Bonfil, Guillermo (Co-advisor); Noguel Monroy, Juana Julieta; Ceballos Cancino, Héctor Gibrán; School of Engineering and Sciences; Campus Estado de México; Arroyo Figueroa, Gustavo
    Electricity load forecasting is an essential tool for effective power grid operation and for energy markets. However, a lack of accuracy in the estimation of the electricity demand may cause an excessive or insufficient supply, which can produce instabilities in the power grid or cause load cuts. Hence, probabilistic load forecasting methods have become more relevant, since they capture not only the load point forecast but also the uncertainty associated with it. In this thesis, a framework for building prediction models that produce prediction intervals is proposed. The framework creates a probabilistic short-term load forecasting (STLF) model by completing a series of tasks. First, prediction models are generated using a prediction method and a segmented time series dataset. Next, the prediction models are used to produce point forecasts, and the errors are registered for each subset. At the same time, an association rules analysis is performed on the same segmented time series dataset to model cyclic patterns. Then, with the registered errors and the information obtained from the association rules analysis, the prediction intervals are created. Finally, the performance of the prediction intervals is measured using specific error metrics. The methodology is tested on two datasets: Mexico and the Electric Reliability Council of Texas (ERCOT). The best results for the Mexico dataset are a Prediction Interval Coverage Probability (PICP) of 96.49% and a Prediction Interval Normalized Average Width (PINAW) of 12.86, and for the ERCOT dataset a PICP of 94.93% and a PINAW of 3.6. These results were measured after reductions of 14.75% and 5.25% in the prediction interval normalized average width for the Mexico and ERCOT datasets, respectively. Reducing the prediction interval is important because it can help reduce the amount of electricity purchased, and even a 1% reduction in electricity purchases represents a large amount of money. The main contributions of this work are: a framework that can convert any point forecast model into a probabilistic model, the Max Lift rule method for selecting high-quality rules, and the probabilistic Mean Absolute Error and Root Mean Squared Error metrics.
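A condensed sketch of the interval construction and the two headline metrics is given below; it builds intervals from empirical quantiles of past point-forecast errors (the thesis refines them with association rules, which is not reproduced here), and the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
y_true = 100 + 10 * np.sin(np.linspace(0, 20, 500)) + rng.normal(0, 2, 500)
y_hat = y_true + rng.normal(0, 2, 500)            # stand-in point forecasts

errors = (y_true - y_hat)[:300]                   # training-period errors
lo_q, hi_q = np.quantile(errors, [0.025, 0.975])  # 95% empirical error band

lower = y_hat[300:] + lo_q                        # prediction interval bounds
upper = y_hat[300:] + hi_q
y_test = y_true[300:]

picp = np.mean((y_test >= lower) & (y_test <= upper))           # coverage
pinaw = np.mean(upper - lower) / (y_test.max() - y_test.min())  # norm. width
print(f"PICP = {picp:.2%}, PINAW = {pinaw:.2%}")
```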
  • Doctoral thesis
    A classifier-based fusion algorithm for latent fingerprint identification based on a neural network
    (Instituto Tecnológico y de Estudios Superiores de Monterrey) Valdes Ramirez, Danilo; Medina Pérez, Miguel Ángel; Gutiérrez Rodríguez, Andrés; Morales Moreno, Aythami; Loyola González, Octavio; García Borroto, Milton; Gutiérrez Rodríguez, Andrés E.; Escuela de Ingeniería y Ciencias; Campus Estado de México; Monroy, Raúl
    Human beings present a singular skin on the surface of their fingers, with furrows, ridges, and sweat pores. Ridges and furrows describe distinct forms, such as points of maximum curvature, bifurcations and interruptions, and particular ridge contours. Experts have classified those forms as level 1, 2, or 3 features. When the finger surface touches an object, it prints the features in a 2D image termed a fingerprint, due to the grease and sweat released by the pores. Hitherto, there is no report of two fingerprints having the same features, not even in identical twins. Fingerprints of unknown identity acquired from object surfaces are called latent fingerprints. Latent fingerprints have practical applications in criminal investigations and the administration of justice. Such sensitive applications demand high accuracy and speed in the identification of a latent fingerprint. Although some authors have proposed latent fingerprint identification algorithms, the identification rates achieved are still considered insufficient for the sensitivity of latent fingerprint applications. Some improvements to latent fingerprint identification have been reported with fusion algorithms. However, the typical fusion scheme has focused on weighted sums with empirically determined weights. The prevailing use of weighted sums has left room for improvement through classifier-based fusion. Additionally, the literature on latent fingerprint identification lacks an exhaustive analysis of the suitability of the many fingerprint feature representations that have been proposed, and a quantification of that suitability. In this research, we analyze the appropriateness of several fingerprint feature representations for representing latent fingerprints, finding a preference for minutia descriptors. Hence, we develop a protocol for evaluating minutia descriptors in a closed-set identification. With this protocol, we determine the merit of nine minutia descriptors suitable for identifying latent fingerprints. As a result, we select four minutia descriptors as candidates for a fusion algorithm and tune their parameters for latent fingerprint identification. Next, we evaluate the four minutia descriptors with their global matching algorithms on subsets of latent fingerprints of good, bad, and ugly quality. We find that two of them reach the highest identification rates for all subsets and ranks. Therefore, we propose a latent fingerprint identification algorithm that fuses these two algorithms using a neural network with four input attributes characterizing the fingerprints' similarity. Experiments show that our proposal improves on the baseline algorithms in 13 of 15 datasets created with the NIST SD27, MOLF-IIITD, and GCBD databases. Our fusion algorithm reports the highest rank-1 identification rate (71.32%) when matching the latent fingerprints in NIST SD27 against 100,000 fingerprints using only minutiae, and it takes six milliseconds to compare a fingerprint pair.
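The fusion step can be pictured with the minimal sketch below: a small neural network maps a handful of similarity attributes to a fused genuine-match score, in place of a hand-weighted sum. The four attributes and the training data here are synthetic stand-ins, not the thesis's features or databases.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
n = 2000
# Four similarity attributes per fingerprint comparison (hypothetical layout:
# two matcher scores plus two quality/rank features).
genuine = rng.normal(loc=[0.8, 0.7, 0.6, 0.7], scale=0.15, size=(n, 4))
impostor = rng.normal(loc=[0.4, 0.3, 0.3, 0.4], scale=0.15, size=(n, 4))
X = np.vstack([genuine, impostor])
y = np.r_[np.ones(n), np.zeros(n)]

net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=1)
net.fit(X, y)

# Fused score for a new comparison: the network's genuine-match probability,
# usable to rank candidates in closed-set identification.
candidate = np.array([[0.75, 0.62, 0.58, 0.66]])
print(net.predict_proba(candidate)[0, 1])
```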
Unless otherwise specified, these materials are shared under the following terms: Attribution-NonCommercial-NoDerivatives CC BY-NC-ND http://www.creativecommons.mx/#licencias