Research on HP Model Optimization Method Based on Reinforcement Learning

  • Conference paper
Intelligent Computing Theories and Application (ICIC 2019)

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 11644)


Abstract

Protein structure prediction is an important problem in bioinformatics. Predicting the two-dimensional structure of a protein under the hydrophobic-polar (HP) model is a typical non-deterministic polynomial-time (NP)-hard problem. Current HP model optimization methods include the greedy algorithm, particle swarm optimization, the genetic algorithm, the ant colony algorithm, and Monte Carlo simulation. However, these methods are not sufficiently robust and easily fall into local optima. Therefore, an HP model optimization method based on reinforcement learning is proposed. Over the full state space, a reward function based on the energy function is designed and a rigid overlap detection rule is introduced. By exploiting the sequential optimal decisions of a Markov decision process and maximizing the global cumulative return, the method fully exploits the global evolutionary relationships in biological sequences and yields effective, stable predictions. Eight classical sequences from the literature and Uniref50 were selected as experimental subjects, and robustness, convergence, and running time were compared with the greedy algorithm and particle swarm optimization. Both the reinforcement learning method and particle swarm optimization find the lowest-energy structures for all eight sequences, whereas the greedy algorithm does so for only 62.5% of them. Moreover, the running time of the reinforcement learning method is 63.9% lower than that of particle swarm optimization.
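
The abstract refers to the standard two-dimensional HP lattice model, in which a protein is a self-avoiding walk on the square lattice and the energy counts hydrophobic contacts. As a minimal illustrative sketch (not code from the paper; all names here are ours), the energy function being optimized can be written as:

# Illustrative sketch of the standard 2D HP energy function; the paper's
# own implementation details are not given in the abstract.

def hp_energy(sequence, coords):
    """Energy of a 2D HP fold: -1 per H-H lattice contact between residues
    that are not adjacent in the sequence.

    sequence: string over {'H', 'P'}; coords: list of (x, y) lattice points
    forming a self-avoiding walk, one point per residue.
    """
    energy = 0
    for i in range(len(sequence)):
        for j in range(i + 2, len(sequence)):    # skip chain-adjacent pairs
            if sequence[i] == 'H' and sequence[j] == 'H':
                (xi, yi), (xj, yj) = coords[i], coords[j]
                if abs(xi - xj) + abs(yi - yj) == 1:    # unit lattice step
                    energy -= 1
    return energy

# "HPPH" folded into a unit square has one non-adjacent H-H contact:
print(hp_energy("HPPH", [(0, 0), (1, 0), (1, 1), (0, 1)]))    # -1

Every pair of H residues that sit on neighbouring lattice sites, but are not adjacent along the chain, contributes -1; a lower energy means a better fold.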
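
The abstract also mentions a reward function based on the energy function and a rigid overlap detection rule. The paper's exact reward shaping and penalty constants are not stated in the abstract, so the following is a hedged sketch under assumed values: an attempted move onto an occupied lattice site is rejected with a fixed penalty, and each new non-adjacent H-H contact earns a positive reward mirroring the -1 energy per contact.

# Hypothetical incremental reward for placing residue `index` at `next_pos`.
# The penalty value and the +1 contact shaping are our assumptions, not the
# paper's published constants.

OVERLAP_PENALTY = -100.0    # assumed value; the abstract gives none

def step_reward(sequence, coords, next_pos, index):
    """Return (reward, valid) for extending the partial fold `coords`
    (positions of residues 0..index-1) with residue `index` at `next_pos`."""
    occupied = set(coords)
    if next_pos in occupied:              # rigid overlap detection: a move
        return OVERLAP_PENALTY, False     # onto an occupied site is invalid
    reward = 0.0
    if sequence[index] == 'H':
        x, y = next_pos
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nb = (x + dx, y + dy)
            # +1 for each new contact with a hydrophobic residue that is
            # not the chain predecessor (mirrors -1 energy per contact)
            if nb in occupied and nb != coords[index - 1]:
                if sequence[coords.index(nb)] == 'H':
                    reward += 1.0
    return reward, True

# Placing the final 'H' of "HPPH" to close the square earns one contact:
print(step_reward("HPPH", [(0, 0), (1, 0), (1, 1)], (0, 1), 3))    # (1.0, True)

In a Q-learning-style agent, rewards of this form would be accumulated along the fold and used to update action values, so maximizing the cumulative return corresponds to minimizing the final HP energy.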


Acknowledgment

This work was supported in part by the Hubei Province Natural Science Foundation of China (No. 2018CFB526).

Author information

Corresponding author

Correspondence to Zhou Fengli.


Copyright information

© 2019 Springer Nature Switzerland AG

About this paper

Cite this paper

Fengli, Z., Xiaoli, L. (2019). Research on HP Model Optimization Method Based on Reinforcement Learning. In: Huang, D.S., Jo, K.H., Huang, Z.K. (eds.) Intelligent Computing Theories and Application. ICIC 2019. Lecture Notes in Computer Science, vol. 11644. Springer, Cham. https://doi.org/10.1007/978-3-030-26969-2_46

  • DOI: https://doi.org/10.1007/978-3-030-26969-2_46

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-26968-5

  • Online ISBN: 978-3-030-26969-2

  • eBook Packages: Computer Science, Computer Science (R0)
