
The active grading ensemble framework for learning visual quality inspection from multiple humans

  • Industrial and Commercial Application
  • Published in: Pattern Analysis and Applications

Abstract

When applying machine learning technology to real-world applications such as visual quality inspection, several practical issues must be addressed. One such issue is that the inspection is usually performed by multiple human operators, who will inevitably contradict each other on some of the products to be inspected. In this paper an architecture for learning visual quality inspection is proposed which can be trained by multiple human operators and which is based on trained ensembles of classifiers. Most of the applicable ensemble techniques, however, have difficulty learning under these circumstances. To train the system effectively, a novel ensemble framework, called active grading, is proposed as an enhancement of the grading ensemble technique. The active grading algorithms are evaluated on data obtained from a real-world industrial system for visual quality inspection of the printing of labels on CDs, which was labelled independently by four different human operators and their supervisor, and they are compared to the standard grading algorithm and a range of other ensemble (classifier fusion) techniques.
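
To make the starting point concrete, the sketch below implements the standard grading scheme of Seewald and Fürnkranz, which the abstract identifies as the technique that active grading extends: each base classifier is paired with a meta-level "grader" trained to predict, from the input features, whether that base classifier will label a given item correctly, and only the classifiers graded as correct vote on the final label. This is an illustrative sketch only; the class name, the scikit-learn estimators and the majority-vote fallback are assumptions, not the authors' active grading algorithm.

```python
# Illustrative sketch only: the standard grading scheme (Seewald & Fuernkranz)
# that active grading extends. Estimator choices and names are assumptions.
import numpy as np
from sklearn.base import clone
from sklearn.model_selection import cross_val_predict
from sklearn.tree import DecisionTreeClassifier


class GradingEnsemble:
    """Pairs each base classifier with a 'grader' that predicts whether the
    base classifier's output is correct for a given input."""

    def __init__(self, base_classifiers, make_grader=DecisionTreeClassifier):
        self.base_classifiers = base_classifiers
        self.make_grader = make_grader
        self.graders = []

    def fit(self, X, y, cv=5):
        X, y = np.asarray(X), np.asarray(y)
        self.graders = []
        for clf in self.base_classifiers:
            # Out-of-fold predictions show where this base classifier errs.
            oof = cross_val_predict(clone(clf), X, y, cv=cv)
            correct = (oof == y).astype(int)              # grader target: correct (1) / wrong (0)
            self.graders.append(self.make_grader().fit(X, correct))
            clf.fit(X, y)                                 # refit the base classifier on all data
        return self

    def predict(self, X):
        X = np.asarray(X)
        preds = np.array([clf.predict(X) for clf in self.base_classifiers])    # (n_clf, n_samples)
        trusted = np.array([g.predict(X) for g in self.graders]).astype(bool)  # graded as correct?
        labels = []
        for i in range(X.shape[0]):
            votes = preds[trusted[:, i], i] if trusted[:, i].any() else preds[:, i]
            vals, counts = np.unique(votes, return_counts=True)
            labels.append(vals[np.argmax(counts)])        # majority vote among trusted classifiers
        return np.array(labels)
```

The paper's active grading enhancement, designed to cope with training data labelled by multiple, partly contradicting operators, is not reproduced here.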


Notes

  1. \(\mathbf{x}\) will be used to denote a data item, described by the features appropriate to the current classifier (the classifiers may be trained on different feature sets).
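
As a purely hypothetical illustration of this convention (the feature groups and classifier names below are not taken from the paper), each trained classifier can be queried with its own feature view of the same data item \(\mathbf{x}\):

```python
# Hypothetical per-classifier feature views of one data item x;
# the feature groups and classifier names are illustrative assumptions.
import numpy as np

x_views = {
    "contrast_classifier": np.array([0.82, 0.11, 0.37]),  # e.g. contrast/edge features
    "colour_classifier":   np.array([0.05, 0.91]),        # e.g. colour-histogram features
}

def classify_item(classifiers, x_views):
    """Query each trained classifier with its own view of the data item."""
    return {name: clf.predict(x_views[name].reshape(1, -1))[0]
            for name, clf in classifiers.items()}
```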


Acknowledgements

This work was partly supported by the European Commission (project Contract No. STRP016429, acronym DynaVis). This publication reflects only the authors’ views.

Author information

Correspondence to Davy Sannen.


About this article

Cite this article

Sannen, D., Van Brussel, H. The active grading ensemble framework for learning visual quality inspection from multiple humans. Pattern Anal Applic 16, 223–234 (2013). https://doi.org/10.1007/s10044-013-0321-2
