MAB-ABC: A Multi-Armed Bandit-based Framework for Adaptive Strategy Selection in the Artificial Bee Colony Algorithm
Abstract
The Artificial Bee Colony (ABC) algorithm is a competitive and effective method for solving complex optimization problems. However, like other population-based algorithms, it suffers from an imbalance between global exploration and local exploitation, which often leads to slow convergence and suboptimal performance. To address these limitations, and inspired by multi-armed bandits (MAB), we propose a novel ABC variant with an adaptive multi-strategy selection mechanism, termed MAB-ABC. In this approach, we incorporate a Lévy flight-based exploration strategy in the Employed Bee Phase to enhance global search and a micro-scale exploitation strategy in the Onlooker Bee Phase to improve local refinement. The MAB framework adaptively selects the most suitable strategy based on its learned historical performance: when a strategy yields a high-quality solution, its corresponding weight is increased as a reward. This mechanism allows the algorithm to prioritize and allocate more computational resources to the most effective strategies over time. To evaluate MAB-ABC, we conducted extensive experiments on the widely used CEC 2014 benchmark suite. A comparative analysis against eight other optimization algorithms, including several state-of-the-art ABC variants, shows that MAB-ABC outperforms its competitors, achieving a superior balance between exploration and exploitation and higher solution quality.
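The reward-driven selection mechanism described above can be sketched as a simple weight-based bandit: each strategy keeps a weight, strategies are chosen with probability proportional to their weights, and a strategy that produces an improved solution receives a weight increase. This is a minimal illustrative sketch assuming a probability-matching selection rule and an additive reward; the class name, update rule, and parameters are hypothetical and not taken from the paper.

```python
import random


class BanditStrategySelector:
    """Illustrative MAB-style selector (hypothetical, not the paper's exact scheme).

    Each candidate search strategy holds a weight. Selection samples a
    strategy index with probability proportional to the weights, and a
    strategy that yields an improved solution is rewarded by an additive
    weight increase, so effective strategies are chosen more often over time.
    """

    def __init__(self, strategies, reward=1.0):
        self.strategies = list(strategies)
        self.weights = [1.0] * len(self.strategies)  # start from a uniform prior
        self.reward = reward

    def select(self, rng=random):
        # Probability matching: sample an index proportional to current weights.
        return rng.choices(range(len(self.strategies)), weights=self.weights)[0]

    def update(self, index, improved):
        # Reward the chosen strategy only when it produced a better solution.
        if improved:
            self.weights[index] += self.reward


# Toy usage: pretend the second strategy always improves the solution,
# so its weight (and selection probability) should grow over iterations.
selector = BanditStrategySelector(["levy_flight", "micro_exploitation"])
rng = random.Random(0)  # fixed seed for reproducibility
for _ in range(200):
    i = selector.select(rng)
    selector.update(i, improved=(i == 1))
```

In an ABC setting, `select` would be called once per bee-phase iteration to pick which search operator generates the candidate solution, and `update` would be called after greedy selection decides whether the candidate replaced the incumbent.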