A computational model of vision attention for inspection of surface quality in production line

  • Original Paper
Machine Vision and Applications

Abstract

Small defect regions must be detected within a large background when product surface quality is inspected in line by machine vision systems. To solve this problem, a computational model of visual attention was developed, inspired by the behavior and the neuronal architecture of human visual attention. First, a global feature is extracted from the input image using Laws' rules; local features are then extracted and evaluated with an improved version of Itti's saliency map model. The local features are fused into a single topographical saliency map by a multi-feature fusion operator that differs from the Itti model: a better feature receives a higher weighting coefficient and thus contributes more to the fusion of the feature images. Finally, the candidate regions "pop out" in the map. Experimental results show that the model can locate regions of interest and exclude most background regions.
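
The pipeline summarized above (contrast-based feature maps, normalization, and weighted fusion into a single saliency map) can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the box-blur center-surround step stands in for Gaussian-pyramid differences, and the peak-to-mean weighting is only an illustrative stand-in for the paper's own fusion coefficient.

    # Minimal sketch of saliency-style multi-feature fusion (NumPy only).
    # Assumptions, not the paper's method: box-blur center-surround contrast
    # replaces Gaussian pyramids, and the peak-to-mean ratio replaces the
    # paper's own weighting coefficient.
    import numpy as np

    def box_blur(img, k):
        """Crude box blur with an odd window size k."""
        pad = k // 2
        padded = np.pad(img, pad, mode="edge")
        out = np.zeros_like(img, dtype=float)
        for dy in range(-pad, pad + 1):
            for dx in range(-pad, pad + 1):
                out += padded[pad + dy:pad + dy + img.shape[0],
                              pad + dx:pad + dx + img.shape[1]]
        return out / (k * k)

    def center_surround(img, center=3, surround=15):
        """Center-surround contrast: fine-scale blur minus coarse-scale blur."""
        return np.abs(box_blur(img, center) - box_blur(img, surround))

    def normalize(fmap):
        """Rescale a feature map to [0, 1]."""
        fmap = fmap - fmap.min()
        return fmap / (fmap.max() + 1e-12)

    def fuse(feature_maps):
        """Weight each normalized map by its peak-to-mean ratio and sum."""
        saliency = np.zeros_like(feature_maps[0])
        for fmap in feature_maps:
            fmap = normalize(fmap)
            weight = fmap.max() / (fmap.mean() + 1e-12)  # sharper maps weigh more
            saliency += weight * fmap
        return normalize(saliency)

    if __name__ == "__main__":
        gray = np.random.rand(128, 128)      # stand-in for a surface image
        gray[60:68, 60:68] += 2.0            # small synthetic "defect"
        maps = [
            center_surround(gray),                          # intensity contrast
            center_surround(np.abs(np.gradient(gray)[0])),  # vertical edge energy
            center_surround(np.abs(np.gradient(gray)[1])),  # horizontal edge energy
        ]
        saliency = fuse(maps)
        y, x = np.unravel_index(np.argmax(saliency), saliency.shape)
        print(f"most salient location: ({y}, {x})")

Weighting each map by how sharply it peaks mirrors the idea that a "better" feature, one that singles out only a few locations, should contribute more to the fused map.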

References

  1. Li G., Su Z., Xia X.: Algorithm for inspection of white foreign fibers in cotton by machine vision with irregular imaging function. Trans. Chin. Soc. Agric. Mach. 43(05), 164–167 (2010)

  2. Su, Z., Wang, J., Huang, M., Guan, B.: A machine vision system with an irregular imaging function. In: The 5th International Symposium on Image and Signal Processing and Analysis, Istanbul, Turkey, pp. 458–463 (2007)

  3. Xu K., Xu J., Chen Y.: On-line surface defect inspection system for cold rolled strips. J. Beijing Univ. Sci. Technol. 24(3), 329–332 (2002)

  4. Borji A., Ahmadabadi M.N., Araabi B.N.: Cost-sensitive learning of top–down modulation for attentional control. Mach. Vis. Appl. 22, 61–76 (2011)

  5. Yuan K., Xiao H., He W.: Survey on machine vision systems based on FPGA. Comput. Eng. Appl. 46(36), 1–6 (2010)

  6. Rajan B., Ravi S.: FPGA based hardware implementation of image filter with dynamic reconfiguration architecture. Int. J. Comput. Sci. Netw. Secur. 6(12), 121–127 (2006)

  7. Pankiewicz, P., Powiretowski, W., Roszak, G.: VHDL implementation of the lane detection algorithm. In: 15th International Conference on Mixed Design of Integrated Circuits and Systems (MIXDES), Poland, pp. 581–584 (2008)

  8. Tabata T., Komuro T., Ishikawa M.: Surface image synthesis of moving spinning cans using a 1,000-fps area scan camera. Mach. Vis. Appl. 21, 643–652 (2010)

  9. Watanabe, Y., Komuro, T., Ishikawa, M.: A high-speed vision system for moment-based analysis of numerous objects. In: Proceedings of the IEEE International Conference on Image Processing, Piscataway, NJ, USA, IEEE, pp. 177–180 (2007)

  10. Quinton J.C.: A generic library for structured real-time computations: GPU implementation applied to retinal and cortical vision processes. Mach. Vis. Appl. 21, 529–540 (2010)

  11. Martínez-Zarzuela M., Díaz-Pernas F.J., Antón-Rodríguez M., Díez-Higuera J.F., González-Ortega D., Boto-Giralda D., López-González F., DeLa Torre I.: Multi-scale neural texture classification using the GPU as a stream processing engine. Mach. Vis. Appl. 22, 947–966 (2011)

  12. Karuppiah D.R., Grupen R.A., Zhu Z., Hanson A.R.: Automatic resource allocation in a distributed camera network. Mach. Vis. Appl. 21, 517–528 (2010)

  13. Aiger, D., Talbot, H.: The phase only transform for unsupervised surface defect detection. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), San Francisco, CA, pp. 295–302 (2010)

  14. Zontak, M., Cohen, I.: Kernel-based detection of defects on semiconductor wafers. In: IEEE International Workshop on Machine Learning for Signal Processing (MLSP), Grenoble, pp. 1–6 (2009)

  15. Golkar E., Patel A., Yazdi L., Prabuwono A.S.: Ceramic tile border defect detection algorithms in automated visual inspection system. J. Am. Sci. 7(6), 542–550 (2011)

  16. Vizireanu D.N., Halunga S., Marghescu G.: Morphological skeleton decomposition interframe interpolation method. J. Electron. Imaging 19(2), 1–3 (2010)

  17. Vizireanu D.N.: Morphological shape decomposition interframe interpolation method. J. Electron. Imaging 17(1), 1–5 (2008)

  18. Vizireanu N., Udrea R.: Visual-oriented morphological foreground content grayscale frames interpolation method. J. Electron. Imaging 18(2), 1–3 (2009)

  19. Eitzinger C., Heidl W., Lughofer E., Raiser S., Smith J.E., Tahir M.A., Sannen D., Van Brussel H.: Assessment of the influence of adaptive components in trainable surface inspection systems. Mach. Vis. Appl. 21, 613–626 (2010)

  20. Thorpe S., Fize D., Marlot C.: Speed of processing in the human visual system. Nature 381, 520–522 (1996)

  21. Itti L., Koch C., Niebur E.: A model of saliency-based visual attention for rapid scene analysis. IEEE Trans. Pattern Anal. Mach. Intell. 20(11), 1254–1259 (1998)

  22. Itti L., Koch C.: Computational modeling of visual attention. Nat. Rev. Neurosci. 2(3), 194–203 (2001)

  23. Walther D., Koch C.: Modeling attention to salient proto-objects. Neural Netw. 19, 1395–1407 (2006)

  24. Sun Y., Fisher R.: Object-based visual attention for computer vision. Artif. Intell. 146(1), 77–123 (2003)

  25. Harel, J., Koch, C., Perona, P.: Graph-based visual saliency. In: Proceedings of Neural Information Processing Systems (NIPS), pp. 545–552 (2006)

  26. Hou, X., Zhang, L.: Saliency detection: a spectral residual approach. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1–8 (2007)

  27. Hou X., Harel J., Koch C.: Image signature: highlighting sparse salient regions. IEEE Trans. Pattern Anal. Mach. Intell. 34(1), 194–201 (2012)

  28. Zhang L., Tong M.H., Marks T.K., Shan H., Cottrell G.W.: SUN: a Bayesian framework for saliency using natural statistics. J. Vis. 8(7), 1–20 (2008)

  29. Achanta, R., Hemami, S., Estrada, F., Susstrunk, S.: Frequency-tuned salient region detection. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Miami Beach, FL, USA, pp. 1597–1604 (2009)

  30. Gao D., Han S., Vasconcelos N.: Discriminant saliency, the detection of suspicious coincidences and applications to visual recognition. IEEE Trans. Pattern Anal. Mach. Intell. 31(6), 989–1005 (2009)

  31. Jacobson N., Lee Y.-L., Mahadevan V., Vasconcelos N., Nguyen T.Q.: A novel approach to FRUC using discriminant saliency and frame segmentation. IEEE Trans. Image Process. 19(11), 2924–2934 (2010)

  32. Frintrop, S., Rome, E.: Simulating visual attention for object recognition. In: The Workshop on Early Cognitive Vision, Isle of Skye, Scotland (2004)

  33. Zhang Q., Gu G., Xiao H.: Computational model of visual selective attention. Robot 31(6), 574–580 (2009)

  34. Wang, L.: Feature extraction and classification for images. Xidian University (2006)

  35. Jain R., Kasturi R., Schunck B.G.: Machine Vision, pp. 140–185. McGraw-Hill, New York (1995)

  36. Torralba A., Oliva A., Castelhano M., Henderson J.: Contextual guidance of eye movements and attention in real-world scenes: the role of global features in object search. Psychol. Rev. 113(4), 766–784 (2006)

  37. Zhang, Y.: Research on the multi-spectral remote sensing image fusion technology. Northwestern Polytechnical University (2006)

Author information

Corresponding author

Correspondence to Guohui Li.

Additional information

This work was supported by the University Scientific Research Project of Sichuan Normal University (10MSL07) and the Key Research Project of Sichuan Normal University (2010).

Cite this article

Li, G., Shi, J., Luo, H. et al. A computational model of vision attention for inspection of surface quality in production line. Machine Vision and Applications 24, 835–844 (2013). https://doi.org/10.1007/s00138-012-0429-1
