Ciencias Exactas y Ciencias de la Salud
Permanent URI for this collection: https://hdl.handle.net/11285/551039
This collection contains theses and graduate projects from the Master's programs of the Schools of Engineering and Sciences as well as Medicine and Health Sciences.
Search Results
- Deep learning applied to the detection of traffic signs in embedded devices (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2024-06) Rojas García, Javier; Fuentes Aguilar, Rita Quetziquel; emimmayorquin; Morales Vargas, Eduardo; Izquierdo Reyes, Javier; School of Engineering and Sciences; Campus Eugenio Garza Sada.
  Computer vision is an integral component of autonomous vehicle systems, enabling tasks such as obstacle detection, road infrastructure recognition, and pedestrian identification. Autonomous agents must perceive their environment to make informed decisions and to plan and control actuators to achieve predefined goals, such as navigating from point A to point B without incident. In recent years, there has been growing interest in developing Advanced Driver Assistance Systems such as lane-keeping assistants, emergency braking mechanisms, and traffic sign detection (TSD) systems. This growth is driven by advancements in Deep Learning techniques for image processing, enhanced hardware capabilities for edge computing, and the numerous benefits promised by autonomous vehicles. This work investigates the performance of three recent and popular object detectors from the YOLO series (YOLOv7, YOLOv8, and YOLOv9) on a custom dataset to identify the optimal architecture for TSD. The objective is to optimize and embed the best-performing model on the Jetson Orin AGX platform to achieve real-time performance. The custom dataset is derived from the Mapillary Traffic Sign Detection dataset, a large-scale, diverse, and publicly available resource. The work focuses on detecting traffic signs that could affect the longitudinal control of the vehicle. Results indicate that YOLOv7 offers the best balance between mean Average Precision and inference speed, with optimized versions running at over 55 frames per second on the embedded platform, surpassing by an ample margin what is often considered real-time (30 FPS). Additionally, this work provides a working system for real-time traffic sign detection that could be used to alert inattentive drivers and contribute to reducing car accidents. Future work will explore further optimization techniques such as quantization-aware training, conduct more thorough real-life scenario testing, and investigate other architectures, including vision transformers and attention mechanisms, among other proposed improvements.
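  To make the real-time criterion in the abstract above concrete, the following is a minimal sketch of a frame-rate benchmark for a YOLO detector. It is an illustration only: it uses the Ultralytics API (which covers YOLOv8, one of the three compared models) rather than the thesis' YOLOv7 pipeline, and the model name, video path, and optional TensorRT export step are assumptions, not the author's exact setup.

  ```python
  # Hypothetical sketch: measure a YOLO detector's end-to-end throughput, in the spirit
  # of the thesis' real-time evaluation on the Jetson Orin AGX. Model, video, and the
  # 30 FPS threshold below are illustrative assumptions.
  import time
  import cv2
  from ultralytics import YOLO

  # Optionally export to a TensorRT engine first (run on the target Jetson):
  #   YOLO("yolov8n.pt").export(format="engine", half=True)  # produces yolov8n.engine
  model = YOLO("yolov8n.pt")            # or YOLO("yolov8n.engine") after export

  cap = cv2.VideoCapture("drive.mp4")   # hypothetical dashcam clip
  n_frames, t0 = 0, time.perf_counter()
  while True:
      ok, frame = cap.read()
      if not ok:
          break
      results = model(frame, verbose=False)  # one forward pass per frame
      n_frames += 1
  cap.release()

  fps = n_frames / (time.perf_counter() - t0)
  print(f"throughput: {fps:.1f} FPS (30 FPS is the commonly cited real-time threshold)")
  ```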
- EfficientDet and fuzzy logic for an emergency brake driver assistant system based on traffic lights using a Jetson TX2 and a ZED stereo camera (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2022-04) García Escalante, Andrés Ricardo; FUENTES AGUILAR, RITA QUETZIQUEL; 229297; Fuentes Aguilar, Rita Quetziquel; puelquio, emipsanchez; Terashima Marín, Hugo; Falcón Morales, Luis Eduardo; Álvarez González, Rodolfo Rubén; Escuela de Ingeniería y Ciencias; Campus Monterrey; Carbajal, Oscar Eleno Espinosa.
  A study developed by the University of West Virginia analyzed vehicle collisions, which often occur due to the slow reaction time (RT) of human drivers. The study measured human RT under specific conditions and found that fully aware drivers have an estimated RT of 0.70 to 0.75 seconds, that RT for unexpected but common events, such as a lead car's brake lights, is about 1.25 seconds, and that RT for surprising events is estimated at around 1.50 seconds. The presented work therefore implements a level 1 Advanced Driver Assistance System (ADAS), called Emergency Brake Driver Assistant System based on Traffic Lights (EBDASTL), using a Jetson TX2 and a ZED stereo camera to detect Traffic Light States (TLSs), estimate the distance to a Traffic Light (TL), and make a brake decision based on the TLS and the Traffic Light Distance (TLD), achieving a better response time than human RT in surprising events. The main contribution of this research project is the implementation of a single ADAS with three stages: the Traffic Light State Detection Model (TLSDM) stage using EfficientDet D0, the Traffic Light Distance (TLD) estimation stage using the ZED stereo camera, and the Traffic Light Decision-Making (TLDM) stage using fuzzy logic. To date, no related work combines all three stages. The second main contribution is the on-road test performed in Querétaro, Mexico, where all components of the EBDASTL were mounted in a car and tested in a real-world scenario. The experiment consisted of detecting red and green TLSs at six different positions (5, 7, 9, 11, 13, and 15 meters from the TL). The TLSDM achieved a mean Average Precision of 96% for distances below 13 meters and 89.50% at 15 meters. The TLD stage achieved an overall Root Mean Squared Error (RMSE) of 0.84 across all distances. The TLDM provided a smooth brake profile. Finally, the EBDASTL achieved a response time of 0.23 seconds.
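  As a rough illustration of the fuzzy decision-making (TLDM) stage described above, the sketch below maps a detected traffic light state and an estimated distance to a brake intensity using Mamdani-style inference with triangular membership functions and centroid defuzzification. The membership functions, rule base, and numeric ranges are illustrative assumptions, not the thesis' tuned design.

  ```python
  # Hypothetical TLDM sketch: fuzzy mapping from (distance to traffic light, red-light
  # confidence) to a brake intensity in [0, 1]. All shapes and ranges are assumptions.
  import numpy as np

  def tri(x, a, b, c):
      """Triangular membership with feet at a and c and peak at b (scalars or arrays)."""
      return np.maximum(np.minimum((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

  def brake_command(distance_m, red_confidence):
      # Fuzzify the distance input (ranges roughly cover the 5-15 m span of the road test).
      near = tri(distance_m, -1.0, 0.0, 8.0)
      medium = tri(distance_m, 5.0, 10.0, 15.0)
      far = tri(distance_m, 12.0, 20.0, 21.0)

      # Rule strengths (min acts as fuzzy AND, max as fuzzy OR):
      #   IF light is red AND TL is near   THEN brake hard
      #   IF light is red AND TL is medium THEN brake softly
      #   IF light is not red OR TL is far THEN do not brake
      hard = min(red_confidence, float(near))
      soft = min(red_confidence, float(medium))
      none = max(1.0 - red_confidence, float(far))

      # Output membership functions over brake intensity, clipped by rule strength,
      # aggregated with max, then defuzzified with a discrete centroid.
      y = np.linspace(0.0, 1.0, 201)
      agg = np.maximum.reduce([
          np.minimum(tri(y, 0.6, 1.0, 1.1), hard),
          np.minimum(tri(y, 0.2, 0.5, 0.8), soft),
          np.minimum(tri(y, -0.1, 0.0, 0.3), none),
      ])
      return float((agg * y).sum() / (agg.sum() + 1e-9))

  # Example: a confidently detected red light 3 m ahead yields a strong brake command.
  print(round(brake_command(3.0, 0.95), 2))
  ```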

