Ciencias Exactas y Ciencias de la Salud

Permanent URI for this collection: https://hdl.handle.net/11285/551014

This collection contains doctoral theses and degree works from the doctoral programs of the Schools of Engineering and Sciences and of Medicine and Health Sciences.

  • Doctoral thesis
    A generalist reinforcement learning agent for compressing multiple convolutional neural networks
    (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2024-12-11) González Sahagún, Gabriel; Conant Pablos, Santiago Enrique; Ortiz Bayliss, José Carlos; Cruz Duarte, Jorge Mario; Gutiérrez Rodríguez, Andrés Eduardo; School of Engineering and Sciences; Campus Monterrey
    Deep learning has achieved state-of-the-art accuracy in multiple fields. A common practice in computer vision is to reuse a pre-trained model for a completely different dataset of the same type of task, a process known as transfer learning, which reduces training time by reusing the filters of the convolutional layers. However, while transfer learning can reduce training time, the model might overestimate the number of parameters needed for the new dataset. As models now achieve near-human performance or better, there is a growing need to reduce their size to facilitate deployment on devices with limited computational resources. Various compression techniques have been proposed to address this issue, but their effectiveness varies depending on hyperparameters. To navigate these options, researchers have worked on automating model compression. Some have proposed using reinforcement learning to teach a deep learning model how to compress another deep learning model. This study compares multiple approaches for automating the compression of convolutional neural networks and proposes a method for training a reinforcement learning agent that works across multiple datasets without the need for transfer learning. The agents were tested using leave-one-out cross-validation, learning to compress a set of LeNet-5 models and testing on another LeNet-5 model with different parameters. The metrics used to evaluate these solutions were accuracy loss and the number of parameters of the compressed model. The agents suggested compression schemes that were on or near the Pareto front for these metrics. Furthermore, the models were compressed by more than 80% with minimal accuracy loss in most cases.
    The significance of these results is that by scaling this methodology up to larger models and datasets, an AI assistant for model compression, similar in spirit to ChatGPT, could be developed, potentially changing model compression practice and enabling advanced deployments in resource-constrained environments.
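The abstract evaluates compressed models on two competing metrics, accuracy loss and parameter count, and reports that the agent's suggestions lay on or near the Pareto front. As a minimal illustration (not code from the thesis), the sketch below shows how candidate compression schemes can be filtered to that front; all scheme names and numbers are hypothetical.

```python
def pareto_front(candidates):
    """Keep candidates not dominated on (accuracy_loss, num_params).

    A candidate is dominated if some other candidate is at least as good
    on both metrics and strictly better on at least one.
    """
    front = []
    for c in candidates:
        dominated = any(
            o["accuracy_loss"] <= c["accuracy_loss"]
            and o["num_params"] <= c["num_params"]
            and (o["accuracy_loss"] < c["accuracy_loss"]
                 or o["num_params"] < c["num_params"])
            for o in candidates
        )
        if not dominated:
            front.append(c)
    return front

# Hypothetical compression schemes for a LeNet-5-sized model (~60k parameters).
schemes = [
    {"name": "prune_50", "accuracy_loss": 0.004, "num_params": 30_000},
    {"name": "prune_80", "accuracy_loss": 0.012, "num_params": 12_000},
    {"name": "prune_90", "accuracy_loss": 0.050, "num_params": 6_000},
    {"name": "bad_mix",  "accuracy_loss": 0.030, "num_params": 25_000},  # dominated by prune_80
]

for s in pareto_front(schemes):
    print(s["name"])
```

Here `bad_mix` is excluded because `prune_80` loses less accuracy with fewer parameters; the remaining three schemes each trade one metric for the other, which is the shape of result the abstract reports for the agent's suggestions.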
Unless otherwise specified, these materials are shared under the following terms: Attribution-NonCommercial-NoDerivatives CC BY-NC-ND http://www.creativecommons.mx/#licencias

The user is obliged to use the services and content provided by the University, in particular its printed and electronic resources, in accordance with current legislation, the principles of good faith, and generally accepted practice, without contravening public order, especially when, for the proper performance of their activity, they need to reproduce, distribute, communicate, and/or make available fragments of printed works or works in analog or digital format, whether on paper or electronic media. Ley 23/2006, de 7 de julio, amending the revised text of the Intellectual Property Law, approved
