Deterministic behavior of temperature field in turboprop engine via shallow neural networks

  • Original Article
  • Published:
Neural Computing and Applications

Abstract

A study of machine learning approaches for temperature field (rake) prediction in a turboprop engine is presented. The potential of supervised machine learning with shallow feed-forward neural network (NN) architectures is studied for predicting and reconstructing temperature fields from a single sensor measurement. Shallow (neither deep nor gated) NNs are studied before more complex networks because they provide robust nonlinear mapping at low computational cost, and their mathematical structure remains simple, especially for a limited amount of training data; revealing a governing law in the data through a simpler architecture is therefore desirable. Moreover, the problem suits feed-forward architectures because only one temperature sensor is considered as a real-time input. It is investigated which shallow NN architectures and learning algorithms are the most accurate and generalize best on the provided turboprop temperature field data. The key finding is that the deterministic governing law of temperatures in a turboprop engine can be captured. Consequently, the temperature sensor locations in the rakes can be analyzed to allocate the most informative positions for a limited number of temperature sensors inside the engine. The machine learning results also confirm the importance of slow heat transfer between internal engine parts and the temperature sensors alongside the air propulsion. The proposed neural network application concept is thus a promising foundation for the further design of a modern turboprop health monitoring system.
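As a rough illustration of the workflow outlined above (not the paper's code), the sketch below trains a shallow feed-forward network to reconstruct a multi-point rake temperature field from a single sensor reading. The synthetic data, the network size, the solver choice, and the use of scikit-learn's MLPRegressor are all illustrative assumptions.

```python
# Minimal sketch (illustrative only): reconstructing a rake temperature field
# from a single sensor with a shallow feed-forward network.
# Assumptions: the data below are synthetic stand-ins for the proprietary
# turboprop measurements; the architecture and solver are arbitrary choices.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic stand-in: one input sensor temperature, 20 rake temperatures as targets.
n_samples, n_rake = 2000, 20
theta_in = rng.uniform(500.0, 900.0, size=(n_samples, 1))            # single sensor [K]
coeffs = rng.uniform(0.8, 1.2, size=n_rake)
theta_rake = theta_in * coeffs + 5.0 * rng.standard_normal((n_samples, n_rake))

# z-score inputs and targets, as done for the figures in the paper.
x_scaler, y_scaler = StandardScaler(), StandardScaler()
X = x_scaler.fit_transform(theta_in)
Y = y_scaler.fit_transform(theta_rake)

# One small hidden layer -> a shallow (non-deep, non-gated) architecture.
mlp = MLPRegressor(hidden_layer_sizes=(10,), activation="tanh",
                   solver="lbfgs", max_iter=2000, random_state=0)
mlp.fit(X[:1500], Y[:1500])

# Reconstruct the full field on held-out data from the single sensor alone.
Y_hat = mlp.predict(X[1500:])
rmse = np.sqrt(np.mean((Y_hat - Y[1500:]) ** 2))
print(f"held-out z-scored RMSE: {rmse:.3f}")
```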


Data availability and material

Data from the industrial partner may not be made publicly available.


Acknowledgements

The author acknowledges support from the Technology Agency of the Czech Republic, under the National Centres of Competence 1: Support programme for applied research, experimental development and innovation, from the National Centre of Competence for Aeronautics and Space (TN01000029) and from the Faculty of Mechanical Engineering, Czech Technical University in Prague. The author also acknowledges support from the ESIF EU Operational Programme Research, Development and Education, and from the Center of Advanced Aerospace Technology (CZ.02.1.01/0.0/0.0/16_019/0000826), Faculty of Mechanical Engineering, Czech Technical University in Prague.

Funding

TACR NCC 1 (NaCCAS, TN01000029) and EU OPRDE (CAAT, CZ.02.1.01/0.0/0.0/16_019/0000826).

Author information

Authors and Affiliations

Authors

Corresponding author

Correspondence to Ivo Bukovsky.

Ethics declarations

Conflict of interest

The author declares no conflict of interest or competing interests.

Code availability

Custom code created by the author, or parts of it, may be made available upon request.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

See Tables 2, 3 and Figs. 14, 15.

Table 2 Details of testing all neural architectures and learning algorithms for dataset I as in Fig. 5 (the best results are shown in bold)
Table 3 Details of testing all neural architectures and learning algorithms for dataset II as in Fig. 6 (the best results are shown in bold)
Fig. 14

Dataset I: An example comparison of prediction results showing the mapping of the measured temperatures \(\theta_{17} = \theta_{17} (\theta_{1} )\) and of the trained neural architectures \(\tilde{y}_{HONU17} = \tilde{y}_{HONU17} (\theta_{1} )\) and \(\tilde{y}_{MLP17} = \tilde{y}_{MLP17} (\theta_{1} )\) (z-scored data)

Fig. 15

Dataset II: An example comparison of prediction results showing the mapping of the measured temperatures \(\theta_{19} = \theta_{19} (\theta_{13} )\) and of the trained neural architecture \(\tilde{y}_{MLP19} = \tilde{y}_{MLP19} (\theta_{13} )\)
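For context on the HONU curves referenced in Figs. 14 and 15, the sketch below is an illustrative example (not the paper's code) of a quadratic higher-order neural unit, i.e. a second-order polynomial mapping that is linear in its weights and can therefore be fitted in closed form; the synthetic z-scored data and the choice of second order are assumptions.

```python
# Illustrative sketch only: a quadratic higher-order neural unit (HONU)
# fitted by ordinary least squares on z-scored single-sensor data,
# mapping one rake temperature to another (cf. Figs. 14 and 15).
# The synthetic data and the 2nd-order choice are assumptions, not the paper's setup.
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for two z-scored rake temperatures: theta_13 -> theta_19.
x = rng.standard_normal(1000)                                  # z-scored theta_13
y = 0.9 * x + 0.1 * x**2 + 0.05 * rng.standard_normal(1000)    # z-scored theta_19

# Quadratic HONU: y_tilde = w0 + w1*x + w2*x^2, linear in the weights w.
Phi = np.column_stack([np.ones_like(x), x, x**2])   # augmented polynomial basis
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)         # closed-form least-squares fit
y_tilde = Phi @ w

print("HONU weights:", np.round(w, 3))
print("RMSE:", np.sqrt(np.mean((y_tilde - y) ** 2)).round(3))
```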

Rights and permissions

Reprints and permissions

About this article


Cite this article

Bukovsky, I. Deterministic behavior of temperature field in turboprop engine via shallow neural networks. Neural Comput & Applic 33, 13145–13161 (2021). https://doi.org/10.1007/s00521-021-06013-7


  • Received:

  • Accepted:

  • Published:

  • Issue Date:

  • DOI: https://doi.org/10.1007/s00521-021-06013-7

Keywords