Structure and weight optimization of neural network based on CPA-MLR and its application in naphtha dry point soft sensor

Original Article
Neural Computing and Applications

Abstract

The structure and weights of a neural network play an important role in its predictive performance. To overcome the main flaws of neural networks, such as under-fitting, over-fitting, and wasted computational resources, a correlation pruning algorithm combined with multiple linear regression (CPA-MLR) is proposed to optimize the structure and weights of the network. First, an initial three-layer network with the maximum number of hidden nodes is selected and trained by back-propagation (BP). Second, correlation analysis of the hidden-layer outputs is carried out to identify redundant hidden nodes. Third, the redundant nodes are deleted one by one, and a multiple linear regression model between the hidden-layer outputs and the expected input of the output layer, obtained through the inverse function of the output-layer node, is employed to compute the optimal weights. Finally, the optimal network structure, corresponding to the best predictive performance, is obtained. A practical example, the development of a naphtha dry-point soft sensor, is then used to illustrate the performance of CPA-MLR. The results show that the predictive performance of the soft sensor first improves and then degrades as redundant nodes are deleted, with the best performance obtained at the optimal number of hidden nodes.
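
To make the procedure concrete, the following is a minimal NumPy sketch of the CPA-MLR loop described above, not the authors' implementation. It assumes sigmoid activations on the hidden and output nodes, a single output scaled into (0, 1), an illustrative correlation threshold of 0.95, and input-to-hidden parameters W1, b1 already trained by BP; all function names and thresholds here are hypothetical.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logit(y, eps=1e-6):
    # Inverse function of the sigmoid output node: recovers the "expected
    # input" of the output layer from the (clipped) target values.
    y = np.clip(y, eps, 1.0 - eps)
    return np.log(y / (1.0 - y))

def redundant_hidden_nodes(H, threshold=0.95):
    # Correlation analysis of the hidden-layer outputs H (samples x nodes):
    # flag a node as redundant if its output is highly correlated with an
    # earlier node that has not itself been flagged.
    corr = np.corrcoef(H, rowvar=False)
    redundant = []
    for j in range(corr.shape[1]):
        for i in range(j):
            if i not in redundant and abs(corr[i, j]) > threshold:
                redundant.append(j)
                break
    return redundant

def mlr_output_weights(H, y):
    # Multiple linear regression of logit(y) on the hidden-layer outputs;
    # a column of ones supplies the output-node bias.
    A = np.hstack([H, np.ones((H.shape[0], 1))])
    w, *_ = np.linalg.lstsq(A, logit(y), rcond=None)
    return w

def prune_and_refit(W1, b1, X, y, threshold=0.95):
    # Delete the redundant nodes one by one, refit the output weights by
    # MLR after each deletion, and keep the structure with the lowest
    # error (a validation set would be used in practice).
    H = sigmoid(X @ W1 + b1)
    keep = list(range(H.shape[1]))
    best = None
    for victim in [None] + redundant_hidden_nodes(H, threshold):
        if victim is not None:
            keep.remove(victim)
        Hk = H[:, keep]
        w = mlr_output_weights(Hk, y)
        pred = sigmoid(np.hstack([Hk, np.ones((len(Hk), 1))]) @ w)
        rmse = float(np.sqrt(np.mean((pred - y) ** 2)))
        if best is None or rmse < best[0]:
            best = (rmse, list(keep), w)
    return best  # (error, indices of retained hidden nodes, output weights)
```

Refitting the output weights by regressing the inverse-activation targets on the retained hidden outputs is what lets each candidate pruned structure be evaluated by a single least-squares solve rather than a full BP retraining run.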

Acknowledgments

The authors gratefully acknowledge the support of the following foundations: the National Natural Science Foundation of China (20776042), the Doctoral Fund of the Ministry of Education of China (20090074110005), the Program for New Century Excellent Talents in University (NCET-09-0346), the "Shu Guang" project (09SG29), and the Fundamental Research Funds for the Central Universities.

Author information

Corresponding author

Correspondence to Xuefeng Yan.

About this article

Cite this article

Wang, Y., Chen, C. & Yan, X. Structure and weight optimization of neural network based on CPA-MLR and its application in naphtha dry point soft sensor. Neural Comput & Applic 22 (Suppl 1), 75–82 (2013). https://doi.org/10.1007/s00521-012-1044-9
