Path Following Optimization for an Underactuated USV Using Smoothly-Convergent Deep Reinforcement Learning


Abstract:

This paper addresses the path-following problem for an underactuated unmanned surface vessel (USV) using deep reinforcement learning (DRL). A smoothly-convergent DRL (SCDRL) method is proposed based on the deep Q-network (DQN). In this method, an improved DQN structure is developed as a decision-making network to reduce the complexity of the control law for path following of a three-degree-of-freedom USV model. An exploration function based on adaptive gradient descent is proposed to extract training knowledge for the DQN from empirical data. In addition, a new reward function is designed to evaluate the output decisions of the DQN and thereby reinforce the decision-making network for USV path-following control. Numerical simulations were conducted to evaluate the performance of the proposed method. The results demonstrate that the proposed SCDRL converges more smoothly than traditional deep Q-learning, while its path-following error is comparable to that of existing methods. Owing to its usability and generality, the proposed method is well suited to practical USV path-following applications.
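The abstract does not disclose the exact network layout, state and action definitions, or reward weights of SCDRL, so the following PyTorch sketch is only a minimal illustration of the kind of decision-making network and path-following reward it describes. The state vector (cross-track error, heading error, surge speed, yaw rate), the discrete command set, and the weights k_e and k_psi are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch only: all dimensions, state/action choices, and reward
# weights below are illustrative assumptions, not the paper's actual SCDRL design.
import torch
import torch.nn as nn


class PathFollowingDQN(nn.Module):
    """Minimal decision-making network mapping an assumed USV state
    to Q-values over a discrete set of candidate steering commands."""

    def __init__(self, state_dim: int = 4, num_actions: int = 7, hidden: int = 128):
        super().__init__()
        # Assumed state: [cross-track error, heading error, surge speed, yaw rate]
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_actions),  # one Q-value per candidate command
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)


def path_following_reward(cross_track_error: float, heading_error: float,
                          k_e: float = 1.0, k_psi: float = 0.5) -> float:
    """Illustrative reward: penalize deviation from the reference path and
    misalignment with its tangent (k_e, k_psi are placeholder weights)."""
    return -(k_e * abs(cross_track_error) + k_psi * abs(heading_error))


if __name__ == "__main__":
    q_net = PathFollowingDQN()
    s = torch.tensor([[0.8, 0.1, 1.5, 0.02]])   # example state vector
    action = q_net(s).argmax(dim=1)             # greedy command for this state
    print(action.item(), path_following_reward(0.8, 0.1))
```

In a DQN training loop of this kind, the greedy selection above would typically be combined with an exploration schedule (the paper proposes one based on adaptive gradient descent) and an experience-replay buffer from which transitions are sampled to update the Q-network.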
Published in: IEEE Transactions on Intelligent Transportation Systems ( Volume: 22, Issue: 10, October 2021)
Page(s): 6208 - 6220
Date of Publication: 05 May 2020

