Fractional-order gradient descent with momentum for RBF neural network-based AIS trajectory restoration

Published in Soft Computing (Foundations section)

Abstract

To guarantee the integrity and accuracy of ship Automatic Identification System (AIS) data, prevent maritime traffic accidents, and support sound decision-making, a fractional-order gradient descent with momentum RBF neural network (FOGDM-RBF) is proposed. Fractional-order calculus is applied to the gradient descent with momentum algorithm used to train the neural network, and the convergence of the proposed algorithm is proved. AIS data from Danish waters and Xiamen Port are used to test the algorithm. The results show that FOGDM-RBF can repair ships' AIS trajectories with satisfactory learning speed and interpolation accuracy. Comparisons show that the proposed algorithm achieves lower training error than gradient descent, stochastic gradient descent, and gradient descent with momentum, along with better interpolation performance, higher accuracy, better generalization, and less tendency to fall into local optima.
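As a rough illustration of the update rule the abstract describes, the following Python sketch combines a classical momentum term with a Caputo-type fractional gradient, approximated elementwise as g · |w|^(1−α) / Γ(2−α). The function name, hyperparameter values, and this particular approximation are illustrative assumptions, not the paper's exact formulation.

```python
import math

import numpy as np

def fogdm_step(w, v, grad, alpha=0.9, lr=0.05, momentum=0.8, eps=1e-8):
    """One FOGDM-style update (illustrative sketch, not the paper's exact rule).

    The ordinary gradient is replaced by a Caputo-type fractional gradient,
    approximated elementwise as grad * |w|^(1 - alpha) / Gamma(2 - alpha),
    then combined with a classical momentum (velocity) term.
    """
    frac_grad = grad * (np.abs(w) + eps) ** (1 - alpha) / math.gamma(2 - alpha)
    v_new = momentum * v - lr * frac_grad  # accumulate momentum
    return w + v_new, v_new                # updated weights and velocity

# Minimizing f(w) = w^2 drives w toward the minimum at 0.
w = np.array([1.0])
v = np.zeros(1)
for _ in range(300):
    w, v = fogdm_step(w, v, 2 * w)
```

With alpha = 1 the fractional factor |w|^0 / Γ(1) equals 1, so the step reduces to ordinary gradient descent with momentum.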

Figures 1–13 appear in the full article.


Acknowledgements

The author thanks the reviewers for valuable comments that helped improve the clarity of the presentation, and is grateful to the editors for their guidance and help.

Funding

This work was supported by the National Natural Science Foundation of China (51879119), the Natural Science Foundation of Fujian Province (2018J05085), and the high-level research and cultivation fund of the transportation engineering discipline at Jimei University (202003).

Author information

Corresponding author

Correspondence to Han Xue.

Ethics declarations

Conflict of interest

The author declares no financial or personal relationships with other people or organizations that could inappropriately influence this work, and no professional or other personal interest of any nature in any product, service, or company that could be construed as influencing the position presented in the manuscript.

Additional information

Communicated by A. Di Nola.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix: Matlab program of Caputo fractional differential

(The Matlab listing is rendered as an image, "figure a", in the published article.)
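Since the published Matlab listing is only available as an image, here is a hedged Python sketch of one standard way to discretize the Caputo fractional derivative, the L1 scheme on a uniform grid. It is a generic illustration under that assumption, not a transcription of the paper's program.

```python
import math

def caputo_l1(f_vals, alpha, h):
    """Approximate the Caputo derivative of order alpha (0 < alpha < 1)
    at the last grid point t_n = n*h, using the standard L1 scheme.

    f_vals holds samples f(t_0), ..., f(t_n) on a uniform grid of step h.
    """
    n = len(f_vals) - 1
    coef = h ** (-alpha) / math.gamma(2 - alpha)
    total = 0.0
    for k in range(n):
        # L1 weights: b_k = (k+1)^(1-alpha) - k^(1-alpha)
        b_k = (k + 1) ** (1 - alpha) - k ** (1 - alpha)
        total += b_k * (f_vals[n - k] - f_vals[n - k - 1])
    return coef * total
```

For f(t) = t the scheme is exact: the Caputo derivative of order α is t^(1−α) / Γ(2−α), so at t = 1 with α = 0.5 it returns 1/Γ(1.5) ≈ 1.1284.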

About this article

Cite this article

Xue, H. Fractional-order gradient descent with momentum for RBF neural network-based AIS trajectory restoration. Soft Comput 25, 869–882 (2021). https://doi.org/10.1007/s00500-020-05484-5
