View planning for three-dimensional environment reconstruction using the Next Best View method
Abstract
This study was conducted to understand the impact of the objective function and the optimization method on the Next Best View problem, which consists of finding the next position a sensor or camera must take in order to scan an object or scene in its entirety. A simulated 5-degree-of-freedom mobile robot with a mounted simulated range sensor was used in a Virtual Reality Modeling Language environment, and the space was discretized with a voxel map. The objective function combined two main factors: an area factor, to ensure that each image taken by the sensor provides as much information as possible, and a motion factor, composed of distance and energy sub-factors, to reduce the resources used by the robot; multiple experiments on a laboratory scene were carried out to determine their best arrangement in the final objective function. Global optimization measures were also implemented, including a backstepping technique to escape local minima and a dynamic change in the objective function. The scene was reconstructed through an iterative process, with each iteration requiring an optimization step, for which three different methods were tested: Nelder-Mead, an Evolution Strategy, and Simulated Annealing. A set of experiments comparing the three methods in computational time and reconstruction efficiency was performed on three environments of increasing difficulty to test their repeatability: a laboratory model, a room containing a cube and a pyramid, and a study room with multiple pieces of furniture and windows.
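The sketch below illustrates the general shape of such an objective function, combining an area (information) term with distance and energy motion sub-factors. It is a minimal illustration, not the study's exact formulation: the weights, the pose layout (x, y, z plus two orientation angles for a 5-DoF platform), and the simple range-based visibility test are all assumptions made for the example.

```python
import numpy as np

# Illustrative weights; the study tunes the balance between the area and
# motion factors experimentally on a laboratory scene.
W_AREA, W_DIST, W_ENERGY = 1.0, 0.3, 0.2

def area_factor(candidate_pose, unknown_voxels, sensor_range=2.0):
    """Fraction of unknown voxel centers within sensor range of the pose
    (a crude stand-in for the expected information gain of the view)."""
    if len(unknown_voxels) == 0:
        return 0.0
    dists = np.linalg.norm(unknown_voxels - candidate_pose[:3], axis=1)
    return np.count_nonzero(dists < sensor_range) / len(unknown_voxels)

def motion_factor(candidate_pose, current_pose):
    """Distance sub-factor plus a simple energy proxy over the remaining DoF."""
    distance = np.linalg.norm(candidate_pose[:3] - current_pose[:3])
    energy = np.sum(np.abs(candidate_pose[3:] - current_pose[3:]))
    return W_DIST * distance + W_ENERGY * energy

def objective(candidate_pose, current_pose, unknown_voxels):
    """Lower is better: reward expected coverage, penalize costly motion."""
    return (-W_AREA * area_factor(candidate_pose, unknown_voxels)
            + motion_factor(candidate_pose, current_pose))
```

A scalar objective of this form can be handed to an off-the-shelf optimizer at each iteration, for example `scipy.optimize.minimize(..., method="Nelder-Mead")` for the Nelder-Mead case, with the Evolution Strategy and Simulated Annealing variants substituted in the same way.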