
An incremental neural learning framework and its application to vehicle diagnostics

Applied Intelligence 28, 29–49 (2008)

Abstract

This paper presents a framework for incremental neural learning (INL) that allows a base neural learning system to incrementally learn new knowledge from new data alone, without forgetting existing knowledge. Upon subsequent encounters with new data examples, INL uses prior knowledge to direct its incremental learning. Several critical issues are addressed, including when the system should learn new knowledge, how to learn new knowledge without forgetting existing knowledge, how to perform inference using both the existing and the newly learned knowledge, and how to detect and handle learned systems that have aged. To validate the proposed INL framework, we use backpropagation (BP) as the base learner and a multi-layer neural network as the base intelligent system. INL has several advantages over existing incremental algorithms: it can be applied to a broad range of neural network systems beyond BP-trained neural networks; it retains the existing neural network structures and weights even during incremental learning; and the neural network committees generated by INL do not interact with one another, with each member seeing the same inputs and error signals at the same time. This limited communication makes the INL architecture attractive for parallel implementation. We have applied INL to two vehicle fault diagnostics problems: end-of-line testing in auto assembly plants and onboard vehicle misfire detection. The experimental results demonstrate that the INL framework can successfully perform incremental learning from unbalanced and noisy data. To show the general capabilities of INL, we also applied it to three standard machine learning benchmark data sets, where the INL systems showed good generalization in comparison with other well-known machine learning algorithms.
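
The abstract outlines the core INL loop: keep previously trained networks frozen, decide from the new data whether new knowledge is needed, train a new committee member with BP on the new data only, and combine members at inference. Below is a minimal sketch of that loop, assuming scikit-learn's MLPClassifier as the BP-trained base learner; the class name INLCommittee, the error threshold, and the majority-vote combination are illustrative assumptions, not the paper's exact gating and fusion rules.

```python
# Illustrative sketch of an incremental neural learning (INL) committee.
# Assumptions (not from the paper): MLPClassifier as the BP base learner,
# a fixed error threshold for triggering new learning, majority-vote fusion,
# and integer class labels.
import numpy as np
from sklearn.neural_network import MLPClassifier


class INLCommittee:
    def __init__(self, error_threshold=0.1):
        self.members = []                  # frozen, previously trained networks
        self.error_threshold = error_threshold

    def _committee_predict(self, X):
        # Every member sees the same inputs; fuse outputs by majority vote.
        votes = np.stack([m.predict(X) for m in self.members])
        return np.apply_along_axis(
            lambda col: np.bincount(col.astype(int)).argmax(), 0, votes)

    def learn(self, X_new, y_new):
        # Decide whether new knowledge is needed: train a new member only
        # when the existing committee errs too often on the new data.
        if self.members:
            err = np.mean(self._committee_predict(X_new) != y_new)
            if err <= self.error_threshold:
                return                     # existing knowledge suffices
        net = MLPClassifier(hidden_layer_sizes=(20,), max_iter=500)
        net.fit(X_new, y_new)              # BP training on the new data only
        self.members.append(net)           # prior members stay untouched

    def predict(self, X):
        # Inference uses both existing and newly learned knowledge.
        return self._committee_predict(X)
```

Because members are never retrained and never exchange information, each new batch of data leaves existing structures and weights intact, and members can be trained or evaluated in parallel, which is the property the abstract cites as making INL attractive for parallel implementation.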



Author information

Corresponding author

Correspondence to Yi L. Murphey.


About this article

Cite this article

Murphey, Y.L., Chen, Z.H. & Feldkamp, L.A. An incremental neural learning framework and its application to vehicle diagnostics. Appl Intell 28, 29–49 (2008). https://doi.org/10.1007/s10489-007-0040-8


