
Neural-Based Predefined-Time Distributed Optimization of High-Order Nonlinear Multiagent Systems


Impact Statement:
Due to the urgent demands of large-scale network optimization, the distributed optimization problem for MASs has gained wide attention in the intelligent control field. However, the problem of predefined-time distributed optimization for high-order nonlinear MASs remains open. Dealing with the distributed optimization problem for high-order MASs usually requires high-order derivatives of the cost function, resulting in a heavy computational burden. Convergence speed is regarded as one of the important indicators of control performance, yet current schemes only handle distributed optimization problems under finite/fixed-time control, and the predefined-time distributed optimization problem is rarely considered at present. In addition, complex operating environments incur unknown nonlinearities; hence, this article attempts to provide a simple predefined-time distributed optimization algorithm and to accurately approximate the unknown nonlinearities using neural networks (NNs).

Abstract:

This article addresses a predefined-time distributed optimization problem for high-order nonlinear multiagent systems (MASs). First, by means of a distributed proportional integration (PI) protocol, a reference model is constructed to evaluate the global optimal solution for MASs. Then, the resulting measurement is fed into a prefilter to produce a reconstructed optimal reference signal and its high-order derivatives. Instead of designing the updated law with σ-modification to deal with unknown nonlinearities, a gradient descent algorithm is developed to train the weights of neural networks (NNs) to achieve higher function approximation accuracy. Moreover, in the framework of prefiltering, an NN-based predefined-time control strategy is built using the backstepping technique to guarantee that all agents’ outputs can reach optimal consensus in predefined time. Finally, simulation examples validate the effectiveness of the presented approach.
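The abstract's first step, a distributed PI protocol that drives all agents toward the minimizer of the sum of local cost functions, can be illustrated with a standard PI consensus-optimization sketch. This is not the paper's exact protocol (which adds prefiltering, NN approximation, and predefined-time convergence); it is a minimal Euler-discretized illustration, assuming four agents on a ring graph with hypothetical local quadratic costs f_i(x) = (x - c_i)^2, whose global optimum is the mean of the c_i.

```python
import numpy as np

# Minimal sketch of a distributed PI consensus-optimization protocol
# (illustrative only; the paper's scheme is more elaborate).
# Continuous-time dynamics, Euler-discretized below:
#   x_i' = -grad f_i(x_i) - sum_j a_ij (x_i - x_j) - sum_j a_ij (v_i - v_j)
#   v_i' =  sum_j a_ij (x_i - x_j)
# Here f_i(x) = (x - c_i)^2, so grad f_i(x) = 2 (x - c_i).

c = np.array([1.0, 3.0, 5.0, 7.0])   # hypothetical local cost minimizers
A = np.array([[0, 1, 0, 1],          # adjacency of an undirected 4-agent ring
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A       # graph Laplacian

x = np.zeros(4)                      # agent decision states
v = np.zeros(4)                      # integral (PI) states
dt = 0.01
for _ in range(20000):
    grad = 2.0 * (x - c)             # local gradients
    x = x + dt * (-grad - L @ x - L @ v)
    v = v + dt * (L @ x)

print(x)  # all agents converge near mean(c) = 4.0
```

The proportional term -Lx enforces consensus, while the integral term -Lv cancels the disagreement among local gradients at steady state, so the consensus value is the global (not merely local) minimizer.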
Published in: IEEE Transactions on Artificial Intelligence ( Volume: 5, Issue: 6, June 2024)
Page(s): 3174 - 3183
Date of Publication: 18 December 2023
Electronic ISSN: 2691-4581

