Need for Optimisation Techniques to Select Neural Network Algorithms for Process Modelling of Reduction Cell

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 1886)

Abstract

Although a broad range of neural networks is available for any particular task, the architecture selected in industry depends on the nature of the application, which can range from performance estimation and pattern recognition to process modelling and control. Network selection can also rest on economic considerations, such as the cost of neural network computation time and of obtaining data for the required model variables. While each candidate model may be a feasible solution, the candidates can be ranked from most to least suitable for a given application according to the performance criteria. In this paper, an appraisal of neural networks for three industrial applications, all involving process modelling of reduction cells for aluminium production, is presented. Regression analysis techniques and six neural network models are assessed using specific assessment criteria. It is shown that no single model is the most appropriate under every assessment criterion in every instance; the decision of which neural network model best suits a specific application is therefore complex, particularly as the assessment criteria are not of equal significance. It is concluded that optimisation techniques are necessary to select an appropriate model for an application.
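
The abstract does not prescribe a specific optimisation technique, so the following Python sketch is purely illustrative: it shows one simple way to rank candidate models when several assessment criteria of unequal importance must be traded off, using a weighted sum of min-max normalised criterion scores. All model names, criteria, weights, and raw values below are hypothetical placeholders, not results from the paper.

```python
# Illustrative multi-criteria ranking of candidate process models.
# Lower raw values are assumed better for every criterion (error, time, cost).

criteria_weights = {          # relative importance of each assessment criterion
    "prediction_error": 0.5,
    "training_time_s": 0.3,
    "data_cost": 0.2,
}

# Hypothetical raw performance of each candidate model on each criterion.
candidates = {
    "regression":      {"prediction_error": 0.12, "training_time_s": 1.0,   "data_cost": 2.0},
    "backpropagation": {"prediction_error": 0.08, "training_time_s": 420.0, "data_cost": 3.0},
    "rbf":             {"prediction_error": 0.06, "training_time_s": 150.0, "data_cost": 5.0},
    "grnn":            {"prediction_error": 0.07, "training_time_s": 20.0,  "data_cost": 5.0},
    "kohonen_rbf":     {"prediction_error": 0.05, "training_time_s": 300.0, "data_cost": 6.0},
}

def rank_models(candidates, weights):
    """Rank models by a weighted sum of min-max normalised scores (lower raw = better)."""
    totals = {name: 0.0 for name in candidates}
    for criterion, weight in weights.items():
        values = [m[criterion] for m in candidates.values()]
        lo, hi = min(values), max(values)
        span = (hi - lo) or 1.0                    # guard against identical raw values
        for name, measures in candidates.items():
            normalised = 1.0 - (measures[criterion] - lo) / span   # best raw value maps to 1.0
            totals[name] += weight * normalised
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

for name, score in rank_models(candidates, criteria_weights):
    print(f"{name:15s} {score:.3f}")
```

Because the criteria are not of equal significance, the ranking produced by such an aggregation depends directly on the chosen weights, which is exactly why the paper argues for a principled optimisation technique rather than an ad hoc model choice.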

Copyright information

© 2000 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Karri, V., Frost, F. (2000). Need for Optimisation Techniques to Select Neural Network Algorithms for Process Modelling of Reduction Cell. In: Mizoguchi, R., Slaney, J. (eds) PRICAI 2000 Topics in Artificial Intelligence. PRICAI 2000. Lecture Notes in Computer Science (LNAI), vol 1886. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-44533-1_49

  • DOI: https://doi.org/10.1007/3-540-44533-1_49

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-67925-7

  • Online ISBN: 978-3-540-44533-3

  • eBook Packages: Springer Book Archive
