
Multi-ant colony algorithm based on the Stackelberg game and incremental learning

Abstract

To address the slow convergence and inadequate accuracy of traditional ant colony algorithms in solving the traveling salesman problem (TSP), we propose a multi-ant colony algorithm based on the Stackelberg game and incremental learning (SGIACO). We incorporate a Stackelberg game strategy across multiple colonies, in which the leader guides the follower to optimize population co-evolution and to balance the algorithm's convergence and diversity. Furthermore, we propose an incremental learning strategy that reinforces efficient paths on the public routes and ignores inefficient ones, thus accelerating the algorithm's convergence. Finally, when the algorithm stagnates, a pheromone balance mechanism helps the ants escape from local optima. We conducted experiments on 23 TSP instances to validate the algorithm's performance and compared it with ACS, MMAS, and other recent algorithms; non-parametric tests were also performed for a comprehensive analysis. Moreover, we verified the feasibility of SGIACO through simulations in robot path planning scenarios. The experimental results show that SGIACO achieves good convergence and accuracy and is competitive with the other algorithms. Future research aims to scale SGIACO to larger real-world applications, enhancing its adaptability and scalability.
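
For context, the sketch below shows a generic two-colony ant colony optimization loop for the TSP in Python, in which the global-best tour additionally reinforces pheromone in a second ("follower") colony. It is only a minimal illustration of the leader-follower guidance and public-route reinforcement ideas summarized above, written under our own assumptions; it is not the authors' SGIACO implementation, and the function names, parameters (alpha, beta, rho, bonus), and the extra-deposit rule are hypothetical. The stagnation-triggered pheromone balance mechanism is omitted.

"""Illustrative sketch only: a generic two-colony ACO for the TSP in which the
global-best tour gives extra pheromone to a 'follower' colony. Not the authors'
SGIACO; all names, parameters, and the reinforcement rule are assumptions."""
import math
import random

def tour_length(tour, dist):
    # Closed-tour length, including the edge back to the start city.
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def build_tour(n, pher, dist, alpha=1.0, beta=3.0):
    # Construct one ant's tour with the standard random-proportional rule.
    start = random.randrange(n)
    tour, unvisited = [start], set(range(n)) - {start}
    while unvisited:
        cur = tour[-1]
        weights = [(pher[cur][j] ** alpha) * ((1.0 / dist[cur][j]) ** beta) for j in unvisited]
        nxt = random.choices(list(unvisited), weights=weights)[0]
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def evaporate_and_deposit(pher, tour, length, rho=0.1, bonus=1.0):
    # Global evaporation followed by a deposit on the edges of the given tour.
    n = len(pher)
    for i in range(n):
        for j in range(n):
            pher[i][j] *= (1.0 - rho)
    for i in range(len(tour)):
        a, b = tour[i], tour[(i + 1) % len(tour)]
        pher[a][b] += bonus / length
        pher[b][a] += bonus / length

def two_colony_aco(cities, iters=200, ants=10):
    n = len(cities)
    dist = [[math.dist(a, b) or 1e-9 for b in cities] for a in cities]
    # Separate pheromone matrices for the leader and follower colonies.
    leader = [[1.0] * n for _ in range(n)]
    follower = [[1.0] * n for _ in range(n)]
    best_tour, best_len = None, float("inf")
    for _ in range(iters):
        for pher in (leader, follower):
            tours = [build_tour(n, pher, dist) for _ in range(ants)]
            it_best = min(tours, key=lambda t: tour_length(t, dist))
            it_len = tour_length(it_best, dist)
            evaporate_and_deposit(pher, it_best, it_len)
            if it_len < best_len:
                best_tour, best_len = it_best, it_len
        # Crude stand-in for leader guidance: edges of the global-best tour
        # receive extra pheromone in the follower colony only.
        evaporate_and_deposit(follower, best_tour, best_len, rho=0.0, bonus=2.0)
    return best_tour, best_len

if __name__ == "__main__":
    random.seed(0)
    pts = [(random.random(), random.random()) for _ in range(30)]
    tour, length = two_colony_aco(pts)
    print(f"best tour length: {length:.3f}")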

Data availability

Enquiries about data availability should be directed to the authors.

Funding

This work was supported in part by the National Natural Science Foundation of China under Grants 61673258 and 61075115, and in part by the Shanghai Natural Science Foundation under Grant 19ZR1421600.

Author information

Contributions

Qihuan Wu wrote the paper and the code of the algorithm. Xiaoming You provided suggestions for revising the manuscript. Sheng Liu prepared the material for the experiments. All authors approved the final manuscript.

Corresponding author

Correspondence to Xiaoming You.

Ethics declarations

Conflict of interest

All the authors declare that they have no conflict of interest.

Ethical approval

This article does not contain any studies with human participants or animals performed by any of the authors.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Wu, Q., You, X. & Liu, S. Multi-ant colony algorithm based on the Stackelberg game and incremental learning. Soft Comput 29, 2107–2128 (2025). https://doi.org/10.1007/s00500-025-10469-3

Keywords