Abstract:
This paper introduces an online reinforcement learning scheme with exploration for distributed approximate optimal control of uncertain nonlinear interconnected systems. The subsystem dynamics, interconnection dynamics, and input gain matrix are approximated using neural network (NN) identifiers with event-based state feedback. A second NN at each subsystem constructs the mapping from states to future reward predictions via reinforcement signals, from which a sequence of approximately optimal distributed control actions is generated. Since the identifiers and controllers at each subsystem require both the local state vector and, due to non-zero interconnections, the state vectors of the other subsystems, a decentralized event-triggering mechanism based on Lyapunov theory is developed to dynamically determine the feedback instants and thereby reduce the communication overhead. Further, a novel strategy for incorporating exploration into the online control framework using the identifiers is proposed to minimize the overall cost during the learning phase. The effects of network delay are discussed, and finally, simulation results are presented to verify the effectiveness of the proposed controller.
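The decentralized event-triggering idea in the abstract can be illustrated with a minimal sketch: each subsystem transmits its state only when the measurement error (the gap between the current state and the last transmitted state) violates a state-dependent threshold. The specific threshold form and the parameters `sigma` and `gamma` below are illustrative assumptions, not the triggering condition derived in the paper.

```python
import numpy as np

def should_trigger(x, x_last, sigma=0.5, gamma=0.01):
    """Hypothetical event-trigger check for one subsystem.

    x      : current local state vector
    x_last : state vector at the last transmission instant
    sigma  : relative threshold coefficient (assumed form)
    gamma  : small absolute offset to avoid chattering near the origin

    Returns True when the measurement error exceeds the threshold,
    i.e., when a new state transmission should occur.
    """
    e = x - x_last  # event (measurement) error since last broadcast
    return np.linalg.norm(e) > sigma * np.linalg.norm(x) + gamma
```

In such schemes, inter-event times are kept nonzero by construction, and the threshold parameters trade communication frequency against closed-loop performance.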
Date of Conference: 14-19 May 2017
Date Added to IEEE Xplore: 03 July 2017
ISBN Information:
Electronic ISSN: 2161-4407