Abstract
A reasonable burden distribution matrix is one of the key requirements for achieving low consumption, high efficiency, high quality and a long campaign life of the blast furnace. This paper proposes a data-driven prediction model for adjusting the burden distribution matrix based on an improved multilayer extreme learning machine (ML-ELM) algorithm. The improved algorithm, named EPLS-ML-ELM, builds on our previously modified ML-ELM algorithm (PLS-ML-ELM) and an ensemble model. The PLS-ML-ELM algorithm uses the partial least squares (PLS) method to improve the algebraic properties of the last hidden layer output matrix of the ML-ELM algorithm. However, the PLS-ML-ELM algorithm may yield different results in different trials of simulations. The ensemble model overcomes this problem and, moreover, improves the generalization performance. Hence, the EPLS-ML-ELM algorithm consists of several PLS-ML-ELMs. Real blast furnace data are used to validate the data-driven prediction model. Compared with prediction models based on the SVM algorithm, the ELM algorithm, the ML-ELM algorithm and the PLS-ML-ELM algorithm, the simulation results demonstrate that the data-driven prediction model based on the EPLS-ML-ELM algorithm has better prediction accuracy and generalization performance.
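To make the ensemble idea in the abstract concrete, the following is a minimal Python sketch of an ensemble of PLS-ML-ELM base learners. The base-learner factory, its fit/predict interface, and the use of simple output averaging are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

class EPLSMLELM:
    """Illustrative sketch of an ensemble of PLS-ML-ELM base learners."""

    def __init__(self, base_learner_factory, n_models=10):
        # base_learner_factory: callable returning a fresh, randomly
        # initialized PLS-ML-ELM model (hypothetical fit/predict interface).
        self.models = [base_learner_factory() for _ in range(n_models)]

    def fit(self, X, y):
        # Train each base learner independently; their random input weights differ,
        # which is the trial-to-trial variability the ensemble is meant to average out.
        for model in self.models:
            model.fit(X, y)
        return self

    def predict(self, X):
        # Combine the base learners' outputs by simple averaging
        # (an assumed combination rule, shown here for regression outputs).
        predictions = np.stack([model.predict(X) for model in self.models], axis=0)
        return predictions.mean(axis=0)
```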
References
Bengio Y (2009) Learning deep architectures for AI. Found Trends Mach Learn 2(1):1–127
Cao JW, Lin ZP, Huang GB, Liu N (2012) Voting based extreme learning machine. Inf Sci 185(1):66–77
Ding S, Zhang N, Xu X, Guo LL, Zhang J (2015) Deep extreme learning machine and its application in EEG classification. Math Probl Eng 2015:129021 (11 pages). https://doi.org/10.1155/2015/129021
Ding SF, Zhang N, Zhang J, Xu XZ, Shi ZZ (2017) Unsupervised extreme learning machine with representational features. Int J Mach Learn Cybern 8(2):587–595
Geerdes M, Toxopeus H, der Vliet CV, Chaigneau R, Vander T (2009) Modern blast furnace ironmaking: an introduction, vol 4. IOS Press, Amsterdam
Geladi P, Kowalski BR (1986) Partial least-squares regression: a tutorial. Anal Chim Acta 185:1–17
Hansen LK, Salamon P (1990) Neural network ensembles. IEEE Trans Pattern Anal Mach Intell 12(10):993–1001
Huang G, Liu TC, Yang Y, Lin ZP, Song SJ, Wu C (2015) Discriminative clustering via extreme learning machine. Neural Netw 70:1–8
Huang GB, Zhu QY, Siew CK (2004) Extreme learning machine: a new learning scheme of feedforward neural networks. In: Proceedings of the 2004 IEEE international joint conference on neural networks, vol 2, pp 985–990
Huang GB, Zhu QY, Siew CK (2006) Extreme learning machine: theory and applications. Neurocomputing 70(1):489–501
Huang GB, Zhou H, Ding X, Zhang R (2012) Extreme learning machine for regression and multiclass classification. IEEE Trans Syst Man Cybern B Cybern 42(2):513–529
Huang ZY, Yu YL, Gu J, Liu HP (2017) An efficient method for traffic sign recognition based on extreme learning machine. IEEE Trans Cybern 47(4):920–933
Kasun LLC, Zhou H, Huang GB, Vong CM (2013) Representational learning with ELMs for big data. IEEE Intell Syst 28(6):31–34
LeCun Y, Bottou L, Bengio Y, Haffner P (1998) Gradient-based learning applied to document recognition. Proc IEEE 86(11):2278–2324
Li MB, Huang GB, Saratchandran P, Sundararajan N (2005) Fully complex extreme learning machine. Neurocomputing 68:306–314
Liang NY, Huang GB, Saratchandran P, Sundararajan N (2006) A fast and accurate online sequential learning algorithm for feedforward networks. IEEE Trans Neural Netw 17(6):1411–1423
Liu MM, Liu B, Zhang C, Wang WD, Sun W (2017) Semi-supervised low rank kernel learning algorithm via extreme learning machine. Int J Mach Learn Cybern 8(3):1039–1052
Liu YC (2012) The law of blast furnace. Metallurgical Industry Press, Beijing
Mao WT, Wang JW, Xue ZN (2017) An ELM-based model with sparse-weighting strategy for sequential data imbalance problem. Int J Mach Learn Cybern 8(4):1333–1345
Peacey JG, Davenport WG (2016) The iron blast furnace: theory and practice. Elsevier, Amsterdam
Radhakrishnan VR, Ram KM (2001) Mathematical model for predictive control of the bell-less top charging system of a blast furnace. J Process Control 11(5):565–586
Shi L, Zhao GS, Li MX, Ma X (2016) A model for burden distribution and gas flow distribution of bell-less top blast furnace with parallel hoppers. Appl Math Model 40(23):10254–10273
StatLib (1997) California housing data set. http://www.dcc.fc.up.pt/ltorgo/Regression/cal_housing
Storn R, Price K (1997) Differential evolution-a simple and efficient heuristic for global optimization over continuous spaces. J Glob Optim 11(4):341–359
Su XL, Yin YX, Zhang S (2016) Prediction model of improved multi-layer extreme learning machine for permeability index of blast furnace. Control Theory Appl 33(12):1674–1684
Tang J, Deng C, Huang GB (2016) Extreme learning machine for multilayer perceptron. IEEE Trans Neural Netw Learn Syst 27(4):809–821
UCI (1990) Image segmentation data set. http://archive.ics.uci.edu/ml/datasets/Image+Segmentation
UCI (1991) Letter recognition data set. http://archive.ics.uci.edu/ml/datasets/Letter+Recognition
UCI (1995) Abalone data set. http://archive.ics.uci.edu/ml/datasets/Abalone
Wang XZ, Chen AX, Feng HM (2011) Upper integral network with extreme learning mechanism. Neurocomputing 74(16):2520–2525
Wang XZ, Shao QY, Miao Q, Zhai HH (2013) Architecture selection for networks trained with extreme learning machine using localized generalization error model. Neurocomputing 102(1):3–9
Wold S, Trygg J, Berglund A, Antti H (2001) Some recent developments in PLS modeling. Chemometr Intell Lab 58(2):131–150
Xue XW, Yao M, Wu ZH, Yang JH (2014) Genetic ensemble of extreme learning machine. Neurocomputing 129:175–184
Yang YM, Wu QMJ (2016) Multilayer extreme learning machine with subnetwork nodes for representation learning. IEEE Trans Cybern 46(11):2570–2583
Yang YM, Wu QMJ, Wang YN, Zeeshan KM, Lin XF, Yuan XF (2015) Data partition learning with multiple extreme learning machines. IEEE Trans Cybern 45(8):1463–1475
Yang YM, Wu QMJ, Wang YN (2016) Autoencoder with invertible functions for dimension reduction and image reconstruction. IEEE Trans Syst Man Cybern Syst. https://doi.org/10.1109/TSMC.2016.2637279
Zhai JH, Xu HY, Wang XZ (2012) Dynamic ensemble extreme learning machine based on sample entropy. Soft Comput 16(9):1493–1502
Zhai JH, Zhang SF, Wang CX (2017) The classification of imbalanced large data sets based on MapReduce and ensemble of ELM classifiers. Int J Mach Learn Cybern 8(3):1009–1017
Zhang HG, Yin YX, Zhang S (2016) An improved ELM algorithm for the measurement of hot metal temperature in blast furnace. Neurocomputing 174:232–237
Zhu QY, Qin AK, Suganthan PN, Huang GB (2005) Evolutionary extreme learning machine. Pattern Recognit 38(10):1759–1763
Zong WW, Huang GB, Chen YQ (2013) Weighted extreme learning machine for imbalance learning. Neurocomputing 101:229–242
Acknowledgements
This work was supported by the National Natural Science Foundation of China under Grant No. 61673056, the Key Program of the National Natural Science Foundation of China under Grant No. 61333002, the Beijing Natural Science Foundation under Grant No. 4182039, the National Natural Science Foundation of China under Grant No. 61673055, and the Beijing Key Discipline Construction Project (XK100080537).
Ethics declarations
Conflict of interest
The authors declare no conflict of interest.
Ethical approval
This article does not contain any studies with human participants or animals performed by any of the authors.
Additional information
Communicated by X. Wang, A.K. Sangaiah, M. Pelillo.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendixes
In the Appendixes, the proposed EPLS-ML-ELM algorithm is verified on standard data sets and compared with the ELM, ML-ELM and PLS-ML-ELM algorithms. There are two parts: Appendix A addresses regression problems, and Appendix B addresses classification problems.
1.1 Appendix A
For the regression problems, the Abalone data set (UCI 1995) and the California Housing data set (StatLib 1997) are used. The benchmark problems are summarized in Table 6. The number of hidden layer nodes for the ELM algorithm is 100. The numbers of nodes in the three hidden layers of the ML-ELM algorithm are 200, 150 and 100, respectively. For PLS-ML-ELM, the number of hidden layers and the number of nodes in each hidden layer are the same as for ML-ELM. For EPLS-ML-ELM, the number of PLS-ML-ELMs is 10, and each PLS-ML-ELM has the same number of hidden layers and the same number of nodes per hidden layer as ML-ELM. In addition, all experiments are carried out over 50 trials. In this part, the root mean square error (RMSE) is used as the evaluation criterion, defined as
$$\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum\limits_{i = 1}^{n} \left( X_i - \hat{X}_i \right)^2 },$$
where n is the total number of testing data, and \(X_i\) and \(\hat{X}_i\) are the actual and predicted values of the ith sample.
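As a quick reference, a minimal sketch of the RMSE computation is given below; averaging the per-trial errors over the 50 trials and the variable names used are illustrative assumptions.

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean square error between actual and predicted values."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

# Assumed usage over the 50 trials: compute the testing RMSE of each trial
# and report the mean (variable names are illustrative).
# trial_rmses = [rmse(y_test, model.predict(X_test)) for model in trained_models]
# mean_rmse = float(np.mean(trial_rmses))
```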
The comparison results are shown in Table 7 and illustrate that the proposed EPLS-ML-ELM algorithm achieves better generalization performance than the other algorithms.
1.2 Appendix B
For the classification problems, the image segmentation data set (UCI 1990), the letter recognition data set (UCI 1991) and the MNIST data set (LeCun et al. 1998) are used. The benchmark problems are summarized in Table 8. All experiments are carried out over 50 trials. The comparison results are shown in Tables 9 and 10 and Fig. 8, and they illustrate that the proposed EPLS-ML-ELM algorithm has better generalization performance than the other algorithms.
For the letter recognition data set, the parameter settings of all the algorithms differ from those in Appendix A. The number of hidden layer nodes for the ELM algorithm is 200. The numbers of nodes in the three hidden layers of the ML-ELM algorithm are 200, 200 and 400, respectively. For PLS-ML-ELM, the number of hidden layers and the number of nodes in each hidden layer are the same as for ML-ELM in Appendix B. For EPLS-ML-ELM, the number of PLS-ML-ELMs is 10, and each PLS-ML-ELM has the same number of hidden layers and the same number of nodes per hidden layer as ML-ELM in Appendix B.
For the MNIST data set, the parameter settings of all the algorithms also differ from those in Appendix A. The number of hidden layer nodes for the ELM algorithm is set to 1000, 1500 and 2000, respectively. For ML-ELM with three hidden layers, the numbers of nodes are set to \([700{-}700{-}1000]\), \([700{-}700{-}1500]\) and \([700{-}700{-}2000]\), respectively. For PLS-ML-ELM, the number of hidden layers and the number of nodes in each hidden layer are the same as for ML-ELM in Appendix B. For EPLS-ML-ELM, the number of PLS-ML-ELMs is 5, and each PLS-ML-ELM has the same number of hidden layers and the same number of nodes per hidden layer as ML-ELM in Appendix B. In addition, the whole MNIST data set is used in each PLS-ML-ELM of EPLS-ML-ELM.
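For the classification experiments, the outputs of the individual PLS-ML-ELMs must be combined into a single class decision. Below is a minimal sketch assuming the base learners' output scores are averaged and the class with the highest mean score is chosen; this combination rule and the interface names are illustrative assumptions, not necessarily the paper's exact scheme.

```python
import numpy as np

def ensemble_classify(models, X):
    """Combine the base PLS-ML-ELM output scores and return predicted class indices."""
    # Each model is assumed to return an (n_samples, n_classes) score matrix;
    # scores are averaged across the ensemble and the highest-scoring class wins.
    scores = np.stack([model.predict(X) for model in models], axis=0)
    mean_scores = scores.mean(axis=0)
    return np.argmax(mean_scores, axis=1)
```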
About this article
Cite this article
Su, X., Zhang, S., Yin, Y. et al. Data-driven prediction model for adjusting burden distribution matrix of blast furnace based on improved multilayer extreme learning machine. Soft Comput 22, 3575–3589 (2018). https://doi.org/10.1007/s00500-018-3153-6