Exact Sciences and Health Sciences
Permanent URI for this collection: https://hdl.handle.net/11285/551039
This collection contains theses and master's degree projects from the Schools of Engineering and Sciences and of Medicine and Health Sciences.
- Reinforcement learning for an attitude control algorithm for racing quadcopters (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2022-06-15) Nakasone Nakamurakari, Shun Mauricio; Bustamante Bello, Martín Rogelio; Navarro Durán, David; Galuzzi Aguilera, Renato; School of Engineering and Sciences; Campus Ciudad de México

  From its first conception to its wide commercial distribution, Unmanned Aerial Vehicles (UAVs) have always presented an interesting control problem: their dynamics are not simple to model and exhibit non-linear behavior. These vehicles have improved as their underlying technology has matured, reaching commercial and leisure use in everyday life. Among the many applications for these vehicles, one that has been rising in popularity is drone racing. As technology improves, racing quadcopters have reached capabilities never before seen in flying vehicles. Although hardware and performance have improved throughout the drone racing industry, control algorithms have, in a way, lagged behind in quality and robustness. In this thesis, a new control strategy based on Reinforcement Learning (RL) is presented to achieve better attitude-control performance for racing quadcopters. Two different plants were developed: a) a simplified dynamics model for the needs of the training process, and b) a higher-fidelity multibody model to validate the resulting controller. Using Proximal Policy Optimization (PPO), the agent is trained via a reward function and interaction with the environment. This dissertation presents a different approach to determining the reward function so that the trained agent learns more effectively and faster.
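The abstract does not give the thesis's actual reward function or training code, but the two ingredients it names can be sketched. Below is a minimal illustration of the PPO clipped surrogate loss for a single sample, plus a hypothetical shaped attitude reward; all weights, names, and signatures here are assumptions for illustration, not the author's implementation:

```python
def ppo_clip_loss(ratio, advantage, eps=0.2):
    """Clipped surrogate loss for one sample, PPO-style.

    ratio: pi_new(a|s) / pi_old(a|s); advantage: estimated advantage A(s, a).
    Returns a loss (the negated clipped objective) to be minimized.
    """
    unclipped = ratio * advantage
    clipped = max(min(ratio, 1.0 + eps), 1.0 - eps) * advantage
    return -min(unclipped, clipped)


def attitude_reward(att_err_rad, rate_err_rads, action_norm,
                    w_att=4.0, w_rate=0.5, w_act=0.05):
    """Hypothetical shaped reward penalizing attitude error, body-rate
    error, and control effort (weights are illustrative assumptions)."""
    return -(w_att * abs(att_err_rad)
             + w_rate * abs(rate_err_rads)
             + w_act * abs(action_norm))
```

In practice such a per-step reward would be summed over an episode by the RL environment, and the clipped loss averaged over a batch of sampled transitions.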
The control algorithm obtained from the training process is simulated and tested against the attitude control algorithm most commonly used in drone races, Proportional-Integral-Derivative (PID) control, and evaluated for its ability to reject noise in the state signals and external disturbances from the environment. Results from agents trained with and without these disturbances are also presented. The resulting control policies were comparable to the PID controller and even outperformed it in noise rejection and robustness to external disturbances.
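The PID baseline mentioned above is the standard one in racing flight controllers. As a rough sketch (gains, timestep, and the class name are illustrative assumptions, not the thesis's tuned values), a single-axis attitude PID step might look like:

```python
class AxisPID:
    """Single-axis PID controller with a fixed timestep.

    Gains here are placeholders; a real racing flight stack runs one such
    loop per axis (roll, pitch, yaw) at the rate-loop frequency.
    """

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def step(self, error):
        """Return the control output for the current attitude error."""
        self.integral += error * self.dt
        # No derivative term on the first step (no previous error yet).
        if self.prev_error is None:
            derivative = 0.0
        else:
            derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)
```

Comparing a learned policy against this baseline typically means feeding both the same noisy state estimates and disturbance profiles and measuring tracking error, which matches the evaluation described above.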