A hybrid multi-objective optimization approach to neural architecture search for super-resolution image restoration
Abstract
Super-resolution image restoration (SRIR) aims to reconstruct a high-resolution image from a degraded low-resolution input, and plays a key role in domains such as surveillance, medical imaging, and content creation. While recent approaches rely on deep neural networks, most architectures are still handcrafted through laborious and error-prone trial and error. Neural Architecture Search (NAS) seeks to automate the design of deep models, balancing predictive accuracy against constraints such as latency and memory usage. Formulating NAS as a bi-level, multi-objective optimization problem makes these trade-offs explicit and motivates flexible search spaces and strategies that prioritize both performance and efficiency.

Prior NAS efforts for SRIR frequently rely on fixed cell structures, scalarized objectives, or computationally intensive pipelines, limiting their practicality on resource-constrained platforms. Benchmarking shows that such methods often struggle to jointly minimize parameters, FLOPs, and inference time without compromising reconstruction quality.

We propose the Branching Architecture Search Space (BASS), a layer-based, multi-depth, multi-branch design that supports dynamic selection, allocation, and repetition of operations. To explore BASS, we introduce a hybrid NAS framework that combines NSGA-III with hill-climbing refinements, guided by SynFlow as a zero-cost trainability estimator. Given the same number of function evaluations as vanilla NSGA-III, the hybrid approach achieves superior trade-offs in trainability, parameter efficiency, and computational cost, and it reaches comparable Pareto-front approximations with substantially fewer evaluations. The resulting solutions offer enhanced model quality, reduced complexity, and improved deployment suitability for real-world SRIR tasks.

Extensive search experiments yield a diverse Pareto front of candidate architectures.
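To make the hybrid idea concrete, the following minimal Python sketch shows Pareto dominance and a dominance-based hill-climbing refinement of the kind that can complement an evolutionary search such as NSGA-III. The `evaluate` function uses toy objectives (a quality-loss proxy and a parameter-count proxy) standing in for the paper's actual SynFlow, parameter, and FLOP objectives, and the integer-vector encoding is purely illustrative, not the BASS encoding:

```python
import random

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all objectives minimized):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def evaluate(arch):
    """Toy stand-in for the real objectives. An 'architecture' is a vector of
    layer widths; error penalizes distance from a hypothetical sweet spot,
    params is a crude parameter-count proxy."""
    error = sum(abs(w - 8) for w in arch)
    params = sum(w * w for w in arch)
    return (error, params)

def hill_climb(arch, steps=500, seed=0):
    """Local refinement: perturb one layer width at a time and accept the
    neighbor only if it Pareto-dominates the current candidate."""
    rng = random.Random(seed)
    best, best_f = list(arch), evaluate(arch)
    for _ in range(steps):
        neighbor = list(best)
        i = rng.randrange(len(neighbor))
        neighbor[i] = max(1, neighbor[i] + rng.choice([-1, 1]))
        f = evaluate(neighbor)
        if dominates(f, best_f):
            best, best_f = neighbor, f
    return best, best_f
```

In a hybrid loop, such a refinement step would be interleaved with NSGA-III generations, polishing promising individuals locally while the evolutionary operators maintain diversity across the front.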
Representative designs are fully trained on DIV2K and evaluated across standard SR benchmarks (Set5, Set14, BSD100, Urban100) at ×2, ×3, and ×4 upscaling factors. Balanced models achieve competitive PSNR while using significantly fewer parameters and FLOPs than heavyweight baselines. The hybrid search converges faster and resolves trade-offs better than single-strategy alternatives, as supported by Bayesian statistical analysis.

The combination of BASS and hybrid NSGA-III enables the discovery of SRIR architectures that effectively balance accuracy and resource constraints. This facilitates deployment on embedded and real-time systems and offers a generalizable framework for resource-aware NAS across other dense prediction tasks.
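For reference, the reported quality metric PSNR is the standard quantity 10·log10(MAX²/MSE). A minimal Python sketch over flat pixel sequences follows; note that published SR benchmark numbers are typically computed on the luminance (Y) channel with border pixels shaved, details that depend on the evaluation protocol and are omitted here:

```python
import math

def psnr(reference, reconstructed, max_val=255.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(MAX^2 / MSE).
    Inputs are flat sequences of pixel values; max_val is the peak
    intensity (255 for 8-bit images)."""
    if len(reference) != len(reconstructed):
        raise ValueError("inputs must have the same length")
    mse = sum((a - b) ** 2 for a, b in zip(reference, reconstructed)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)
```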
Description
https://orcid.org/0000-0002-8561-9886