Ciencias Exactas y Ciencias de la Salud
Permanent URI for this collection: https://hdl.handle.net/11285/551039
This collection contains theses and degree projects from the master's programs of the Schools of Engineering and Sciences, and of Medicine and Health Sciences.
- An explainable AI-based system for kidney stone classification using color and texture descriptors (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2025-06) De Anda García, Ilse Karena; Ochoa Ruiz, Gilberto; emipsanchez; González Mendoza, Miguel; School of Engineering and Sciences; Campus Monterrey; Hinojosa Cervantes, Salvador Miguel
Kidney stone disease affects nearly 10% of the global population and remains a significant clinical and economic burden. Accurate classification of stone subtypes is essential for guiding treatment decisions and preventing recurrence. This thesis presents the design, implementation, and evaluation of an explainable artificial intelligence (XAI)-based dual-output system that predicts both the texture and color subtype of kidney stones using image-based descriptors. The proposed system extracts features from stone images captured in Section and Surface views and processes them through parallel branches optimized for texture and color. Texture classification is performed using an ensemble of PCA-reduced deep descriptors from InceptionV3, AlexNet, and VGG16. For color, the most effective model combined handcrafted HSV descriptors with PCA-compressed deep CNN features. These were fused into a dual-output architecture using a MultiOutputClassifier framework. The models were evaluated using five-fold cross-validation. Texture classification reached 98.67% ± 1.82 accuracy in Section and 95.33% ± 1.83 in Surface. Color classification achieved 90.67% ± 9.25 and 85.34% ± 11.93, respectively. Exact match accuracy for joint prediction was 91.4% in Section and 84.2% in Surface, indicating high coherence between the two outputs. Explainability was addressed through FullGrad visualizations and Weight of Feature (WOF) analysis, both of which showed that the model relied on clinically meaningful image regions and that color features held slightly greater predictive influence.
Compared to state-of-the-art approaches, including multi-view fusion models, the proposed method achieved competitive performance while maintaining a modular and transparent structure. The findings validate the hypothesis that combining deep and handcrafted descriptors can enhance interpretability and, in some cases, performance. This work contributes a clinically aligned and interpretable framework for automated kidney stone classification and supports the integration of XAI into nephrological diagnostic workflows. Moreover, by providing interpretable dual predictions of color and texture, the system can support early preventive decisions aimed at reducing recurrence. Future work could explore advanced generative models to further expand the diversity and clinical utility of synthetic data.
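The dual-output fusion described in the abstract can be sketched with scikit-learn's `MultiOutputClassifier`. The feature dimensions, the random-forest base estimator, and the PCA size below are illustrative assumptions, not the thesis's actual configuration:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.multioutput import MultiOutputClassifier
from sklearn.pipeline import make_pipeline

# Toy stand-ins for the two descriptor families: pooled deep CNN
# activations and handcrafted HSV color features, fused by concatenation.
rng = np.random.default_rng(0)
deep_features = rng.normal(size=(60, 256))
hsv_features = rng.normal(size=(60, 32))
X = np.hstack([deep_features, hsv_features])

# Two targets per image: texture subtype and color subtype.
y_texture = rng.integers(0, 4, size=60)
y_color = rng.integers(0, 3, size=60)
Y = np.column_stack([y_texture, y_color])

# PCA-compress the fused descriptor, then fit one classifier per output.
model = make_pipeline(
    PCA(n_components=16, random_state=0),
    MultiOutputClassifier(RandomForestClassifier(random_state=0)),
)
model.fit(X, Y)

pred = model.predict(X)  # shape (60, 2): [texture, color] per image
# "Exact match" accuracy: both outputs must be right for the same image.
exact_match = (pred == Y).all(axis=1).mean()
```

The exact-match metric here mirrors the joint-prediction accuracy the abstract reports: a prediction only counts when texture and color are both correct.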
- An explainable autoencoder integrating regression and classification trees for anomaly detection (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2025) Caballero Dominguez, Zoe; Monroy Borja, Raúl; mtyahinojosa, emipsanchez; Graff Guerrero, Mario; García Ceja, Enrique Alejandro; González Mendoza, Miguel; Escuela de Ingeniería y Ciencias; Campus Estado de México; Medina Pérez, Miguel Angel
Anomaly detection, or outlier detection, is a critical field, since anomalies are data points that deviate from normal patterns and often represent critical information such as fraud, diseases, or cyber-attacks. These applications are high-risk scenarios that involve high-stakes decision-making; therefore, understanding the reasoning behind the machine learning models used in this area has become an essential requirement. Despite its growing importance, explainable outlier detection remains a challenge, since improving model accuracy while maintaining explainability creates a significant trade-off. Furthermore, anomaly detection models are mostly designed for one type of data, either numerical or categorical. This is a disadvantage when both data types are present in a dataset's attributes, as is common in real-world applications, since transforming categorical values to numerical ones, or vice versa, can produce information loss and reduced performance. In this thesis, we address both challenges by proposing a novel explainable semi-supervised anomaly detection model that integrates classification and regression trees into an autoencoder architecture. We name our proposal the Explainable Outlier Tree-based Encoder (EOTE). EOTE detects anomalies by reconstructing the input instance from the relationships between attributes learned from normal samples: the harder it is for EOTE to reconstruct an instance correctly, the higher the outlier probability it assigns to that instance.
We evaluate EOTE against 12 anomaly detection and one-class classifiers across 110 datasets containing attributes of a single data type (numerical or nominal) as well as a mix of both. Our experiments reveal that EOTE is one of the top-performing algorithms at detecting outliers in datasets with only numerical or only nominal attributes, as well as in datasets with mixed attributes. Thus, without sacrificing performance, EOTE produces interpretable outputs for its classifications, a combination that makes it a suitable classifier for anomaly detection in high-risk applications.
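The reconstruction-based scoring idea behind EOTE can be illustrated with a minimal sketch: train one tree per attribute to predict that attribute from the others on normal data, and score a new instance by how poorly it is reconstructed. This is an illustrative analogue of the tree-in-autoencoder design, not the thesis's actual EOTE algorithm, and the data and model settings are invented for the example:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(1)
# "Normal" training data: four numerical attributes, with attribute 3
# strongly correlated to attribute 0 so there is structure to learn.
X_train = rng.normal(size=(200, 4))
X_train[:, 3] = X_train[:, 0] + 0.1 * rng.normal(size=200)

# One regression tree per attribute: predict it from the remaining ones,
# mimicking "reconstruct each attribute from the others".
trees = []
for j in range(X_train.shape[1]):
    rest = np.delete(X_train, j, axis=1)
    tree = DecisionTreeRegressor(max_depth=4, random_state=0)
    trees.append(tree.fit(rest, X_train[:, j]))

def anomaly_score(x):
    """Mean per-attribute reconstruction error; higher means more anomalous."""
    errors = []
    for j, tree in enumerate(trees):
        rest = np.delete(x, j).reshape(1, -1)
        errors.append(abs(tree.predict(rest)[0] - x[j]))
    return float(np.mean(errors))

normal_point = np.array([0.0, 0.0, 0.0, 0.0])
outlier = np.array([5.0, -5.0, 5.0, -5.0])
# A point far from the learned relationships reconstructs poorly,
# so it receives a higher score than a typical point.
```

For mixed data, classification trees would play the same role for nominal attributes (reconstruction error becoming a misclassification indicator), which is the integration the thesis proposes.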
- An explainable artificial intelligence model for detecting xenophobic tweets (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2021-11-28) Perez Landa, Gabriel Ichcanziho; Torres Huitzil, César; 121431; Loyola Gonzáles, Octavio; emipsanchez; López Monroy, Adrián Pastor; School of Engineering and Sciences; Campus Estado de México; Medina Pérez, Miguel Angel
Xenophobia is hate speech characterized by hatred, fear, or rejection of people from other communities. The worldwide growth of the internet has resulted in the rapid expansion of social network use, and this excessive use has led to hate speech, primarily due to the feeling of pseudo-anonymity that social networks provide. On occasion, the violent behavior present in social network discourse breaks the barriers of the internet and becomes physical violence in real life. Research on the classification of xenophobia in social networks is very recent, which is why few databases are currently available for this task. We therefore created a new Twitter xenophobia database whose main feature is that it was labeled by experts in international relations, psychology, and sociology. The database contains 10,073 manually labeled tweets, of which 2,017 belong to the xenophobia class. An extensive effort is currently underway to migrate from non-explainable machine learning classifiers, known as black boxes, to explainable artificial intelligence (XAI) models that allow the classification to be interpreted and understood. We introduce an XAI model based on contrast patterns, jointly with a new interpretable feature representation based on syntactic, semantic, and sentiment analysis, to understand the characteristics of xenophobic posts on social networks.
The new interpretable feature representation comprises 38 characteristics, including information on feelings, emotions, intentions, syntactic features, and keywords related to xenophobia. Our results show that this feature representation, in conjunction with a contrast-pattern-based classifier, obtained averages of 0.86 and 0.77 in AUC and F1 score, respectively. The experiments show that XAI models can achieve classification results equal to or better than those of black-box models. Furthermore, the new interpretable feature representation based on emotions, feelings, intentions, and xenophobia-related keywords allowed us to extract a set of the words most used in xenophobic posts. The interpretable feature representation, jointly with the XAI contrast-pattern-based model, also allowed us to extract a set of patterns describing the xenophobic and non-xenophobic classes. These patterns are presented in language close to that of the experts and contextualize words associated with xenophobia using emotions, intentions, and feelings.
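The key property of an interpretable feature representation like the one described above is that every feature is a human-readable quantity, so any pattern learned over it can be read directly. The tiny sketch below illustrates that idea only; the word lists, feature names, and naive substring matching are invented for this example and are not the thesis's 38-feature representation:

```python
# Each feature is a named, human-readable count, so a learned rule such as
# "negative_emotion_count >= 1 AND hate_keyword_count >= 1" is directly
# legible to a domain expert. Word lists are illustrative only, and the
# substring matching is deliberately naive (e.g. "fear" would match
# "fearless"); a real system would tokenize and use curated lexicons.
HATE_KEYWORDS = {"invaders", "go back", "them out"}
NEGATIVE_EMOTION = {"hate", "fear", "disgust"}

def interpretable_features(text):
    lowered = text.lower()
    return {
        "hate_keyword_count": sum(k in lowered for k in HATE_KEYWORDS),
        "negative_emotion_count": sum(w in lowered for w in NEGATIVE_EMOTION),
        "exclamations": text.count("!"),
    }

feats = interpretable_features("I hate them, invaders!!")
```

A contrast-pattern miner trained over such features produces rules phrased in these same named quantities, which is what lets the resulting model be presented in language close to that of the experts.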

