Abstract
Combining multiple neural networks is a promising approach to improving generalization, since it is very difficult, if not impossible, to obtain a solution close to the global optimum with a single neural network. In this paper, individual networks are developed from bootstrap re-samples of the original training and testing data sets. Instead of combining all the developed networks, this paper proposes a backward elimination method. In backward elimination, all the individual networks are initially aggregated, and individual networks are then gradually eliminated until the aggregated network's error on the original training and testing data sets can no longer be reduced. The proposed technique is applied to nonlinear process modeling, and the application results demonstrate that it improves model performance significantly compared with aggregating all the individual networks.
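The selection procedure described above can be illustrated with a minimal sketch, assuming equal-weight averaging of the retained networks and a sum-of-squared-errors criterion on held-out data; the paper's exact aggregation and stopping rule may differ, and all function and variable names below are illustrative rather than taken from the paper.

```python
import numpy as np

def backward_eliminate(preds, y):
    """Greedy backward elimination of ensemble members.

    preds : array of shape (n_networks, n_samples), predictions of each
            bootstrap-trained network on the original training/testing data.
    y     : array of shape (n_samples,), target values.

    Returns the indices of the retained networks. This is an illustrative
    sketch, not the authors' implementation.
    """
    selected = list(range(preds.shape[0]))

    def sse(idx):
        # Equal-weight aggregation of the selected networks' predictions.
        return np.sum((preds[idx].mean(axis=0) - y) ** 2)

    current = sse(selected)
    while len(selected) > 1:
        # Try removing each member; keep the removal that reduces error most.
        trials = [(sse([j for j in selected if j != i]), i) for i in selected]
        best_err, worst_member = min(trials)
        if best_err >= current:  # no removal improves the aggregate: stop
            break
        selected.remove(worst_member)
        current = best_err
    return selected

# Example usage with random stand-in data:
# preds = np.random.rand(20, 100)   # 20 bootstrap networks, 100 samples
# y = np.random.rand(100)
# kept = backward_eliminate(preds, y)
```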