A flexible machine vision system for small part inspection based on a hybrid SVM/ANN approach

Published in: Journal of Intelligent Manufacturing

Abstract

Machine vision inspection systems are often used for part classification applications to confirm that correct parts are available in manufacturing or assembly operations. Support vector machines (SVMs) and artificial neural networks (ANNs) are popular choices for classifiers. These supervised classifiers perform well when developed for specific applications and trained with known class images. Their drawback is that they cannot be easily applied to different applications without extensive retuning. Moreover, for the same application, they do not perform well if there are unknown class images. This paper proposes a novel solution to the above limitations of SVMs and ANNs, with the development of a hybrid approach that combines supervised and semi-supervised layers. To illustrate its performance, the system is applied to three different small part identification and sorting applications: (1) solid plastic gears, (2) clear plastic wire connectors and (3) metallic Indian coins. The ability of the system to work with different applications with minimal tuning and user inputs illustrates its flexibility. The robustness of the system is demonstrated by its ability to reject unknown class images. Four hybrid classification methods were developed and tested: (1) SSVM–USVM, (2) USVM–SSVM, (3) USVM–SANN and (4) SANN–USVM. It was found that SANN–USVM gave the best results with an accuracy of over 95% for all three applications. A software package known as FlexMVS (Flexible Machine Vision System) was written to illustrate the hybrid approach, enabling easy execution of the image conditioning, feature extraction and classification steps. The image library and database used in this study are available at http://my.me.queensu.ca/People/Surgenor/Laboratory/Database.html.





Author information

Correspondence to Keyur D. Joshi.

Appendix: FlexMVS software overview


FlexMVS is the software developed for small part classification using the hybrid SVM/ANN approach documented in this paper. It is written in MATLAB 2017a, and the results presented in this paper were generated with it. This appendix illustrates its ease of use, lists the actions required to set up a classification run, and outlines the options available to the user.

The main graphical user interface (GUI) of the system is shown in Fig. 16. As keyed to the labels in Fig. 16, the required actions by the user to setup a run are:

Fig. 16 Main GUI of FlexMVS, with required actions labelled

(a) Select the path of the training and testing folder, which contains the original image database, as explained in the "Design of the image database" section.

(b) Input the size of the conditioned images, as per the guidelines in the "Design of the image database" section. A subsequent press of the 'Condition' button will initiate the conditioning step and the original image database will be replaced with a conditioned image database.

(c) Input the sizes of the smallest and largest parts, again following the guidelines in the "Design of the image database" section. A subsequent press of the 'Extract' button will prompt the user to enter labels, which in turn will initiate the feature extraction step, as outlined in the "Feature selection and extraction" section.

(d) Select one of the four classification methods. A subsequent press of the 'Classify' button will initiate the classification step, as outlined in the "Classification methods" section.

(e) Once classification is complete, the results are displayed: values of accuracy, FPs (false positives) and FNs (false negatives). The user can select another classification method and press the 'Classify' button again to get a new set of results.
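The actions above amount to a condition, extract, classify pipeline. The following is a rough Python sketch of that flow; FlexMVS itself is written in MATLAB, and every function name, feature and threshold below is a hypothetical illustration, not the actual FlexMVS API.

```python
# Illustrative condition -> extract -> classify flow. All names and the
# single "mean intensity" feature are hypothetical, not FlexMVS internals.

def condition(image, size):
    """Reduce the raw image to a common size (stub: truncate rows/cols)."""
    return [row[:size] for row in image[:size]]

def extract_features(image):
    """Extract simple features, e.g. mean intensity and pixel count."""
    pixels = [p for row in image for p in row]
    return {"mean": sum(pixels) / len(pixels), "count": len(pixels)}

def classify(features, class_medians, tolerance=10.0):
    """Assign the class whose training median is nearest; reject as 'OT'
    (unknown class) when no median lies within the tolerance."""
    best_class, best_dist = "OT", tolerance
    for label, med in class_medians.items():
        dist = abs(features["mean"] - med)
        if dist < best_dist:
            best_class, best_dist = label, dist
    return best_class

# Tiny worked example: a 3x3 "image" and two known classes.
img = [[10, 12, 11], [9, 10, 10], [11, 10, 9]]
feats = extract_features(condition(img, size=3))
print(classify(feats, {"C1": 10.0, "C2": 50.0}))  # near C1's median -> "C1"
```

The rejection branch mirrors the paper's key requirement: an image whose features fit no known class falls into the OT class instead of being forced into the nearest one.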

After completing a classification run, the user has four options to investigate the results in more detail, selected by tabs in the bottom half of Fig. 16, labelled:

1. Plot Features
2. Confusion Matrix
3. List FPs & FNs
4. Hybrid-Model

The sub-window for the 'Plot Features' tab appears in Fig. 16. Two switches appear as toggle icons: the first selects whether the training or the testing dataset features are plotted; the second selects whether FPs or FNs are plotted. The feature plots for training and testing will be similar to those shown in Figs. 7 and 12, respectively. The plots of FPs and FNs will be similar to the feature plots, but with fewer classes (only the misclassified ones). For example, Fig. 17 shows the plots for the 3.8% FNs (from the confusion matrix of E9 in Table 4) for the coin application. Comparing the median value plots of Fig. 17 with those of Fig. 9 confirms that the feature values of classes C2 and C3 do not align with the median value plots of the training images and hence were misclassified into the OT class.

Fig. 17 FNs plot obtained from the 'Plot Features' utility
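The median comparison behind this diagnosis can be sketched as follows: compute per-class feature medians over the training set, then measure how far a misclassified image's feature vector sits from each. This is an illustrative Python sketch with made-up numbers, not the MATLAB plotting code inside FlexMVS.

```python
# Sketch of the median-comparison check: per-class training medians vs. the
# feature values of a misclassified (FN) image. Data is illustrative.
from statistics import median

def class_medians(training):
    """Median of each feature, per class, over the training samples."""
    out = {}
    for label, samples in training.items():
        n_feats = len(samples[0])
        out[label] = [median(s[i] for s in samples) for i in range(n_feats)]
    return out

def deviation(features, med):
    """Mean absolute deviation of one image's features from a class median."""
    return sum(abs(f - m) for f, m in zip(features, med)) / len(features)

training = {
    "C2": [[1.0, 2.0], [1.2, 2.1], [0.9, 1.9]],
    "C3": [[5.0, 6.0], [5.1, 6.2], [4.9, 5.8]],
}
meds = class_medians(training)
fn_image = [3.0, 4.0]  # an FN: far from both class medians -> OT
print({c: round(deviation(fn_image, m), 2) for c, m in meds.items()})
# -> {'C2': 2.0, 'C3': 2.0}
```

A large deviation from every known class median is exactly the pattern Fig. 17 visualises for the misclassified C2 and C3 coins.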

If the ‘Confusion Matrix’ tab is selected, the sub-window shown in Fig. 18 will appear. Pressing the ‘Compute and Show’ button will display the confusion matrix for the current run, in a format similar to Table 4.

Fig. 18 Sub-window under the 'Confusion Matrix' tab
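The matrix displayed by 'Compute and Show' can be reproduced from the true and predicted labels of a run. A minimal sketch with toy labels (illustrative only, not the FlexMVS internals):

```python
# Building a confusion matrix from true vs. predicted labels, in the
# Table 4 style: rows = true class, columns = predicted class.

def confusion_matrix(y_true, y_pred, labels):
    index = {lab: i for i, lab in enumerate(labels)}
    cm = [[0] * len(labels) for _ in labels]
    for t, p in zip(y_true, y_pred):
        cm[index[t]][index[p]] += 1
    return cm

labels = ["C1", "C2", "OT"]
y_true = ["C1", "C1", "C2", "C2", "OT"]
y_pred = ["C1", "C2", "C2", "C2", "OT"]
cm = confusion_matrix(y_true, y_pred, labels)
for lab, row in zip(labels, cm):
    print(lab, row)

# Accuracy is the trace over the total: here 4 correct out of 5 = 0.8.
accuracy = sum(cm[i][i] for i in range(len(labels))) / len(y_true)
```

Off-diagonal entries in a given row are that class's FNs; off-diagonal entries in a column are the predicted class's FPs, which is how the 'List FPs & FNs' tab relates to this matrix.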

If the ‘List FPs and FNs’ tab is selected, the sub-window shown in Fig. 19 will appear. If the ‘Show FNs’ button is pressed, a list appears as shown in Fig. 19, giving the names of the FN images. If there are no FNs, a message appears stating ‘No False Negatives Found: Try False Positives instead. You may have 100 percentage accuracy. OR You need to execute Classification step first’. A similar message appears if there were no FPs in the classification run and the user pressed the ‘Show FPs’ button.

Fig. 19 Sub-window under the 'List FPs and FNs' tab

If the ‘Hybrid-Model’ tab is selected, the sub-window shown in Fig. 20 will appear. A press of the ‘Prepare Hybrid-Model’ button initiates the model building process. When the preparation step is complete, a press of the ‘Test Hybrid-Model’ button opens the GUI shown in Fig. 21. The user must then select a test image by pressing the ‘Browse’ button. The selected image will then appear, as shown in Fig. 22. When the ‘Predict’ button is pressed, FlexMVS applies the hybrid-model’s algorithms to the selected test image and provides a decision regarding the class of the image. For example, Fig. 22 shows the selected coin image classified as Class C3 because its features were in line with the training features of Class C3. In the example of Fig. 23, the selected non-Indian (Canadian) coin image was assigned to the OT class because, in this example, FlexMVS had been trained on the Indian coin database.

Fig. 20 Sub-window under the 'Hybrid-Model' tab

Fig. 21 GUI for test hybrid-model before image selection

Fig. 22 GUI for test hybrid-model after 'Predict' is pressed

Fig. 23 GUI for test hybrid-model of an OT class

As a final point, Figs. 22 and 23 show the time taken to classify a single test image (0.14776 s in the case of the Indian coin and 0.14969 s in the case of the Canadian coin). A time of 0.15 s to classify a single part image is equivalent to an inspection rate of 400 parts per min. This provides a baseline for the minimum production rate.
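The throughput conversion is a one-line calculation, checked here for the measured 0.15 s per image:

```python
# Converting per-image classification time into an inspection rate:
# 60 s per minute divided by seconds per part gives parts per minute.
time_per_part_s = 0.15  # measured classification time (Figs. 22-23)
parts_per_minute = round(60 / time_per_part_s)
print(parts_per_minute)  # -> 400
```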


About this article


Cite this article

Joshi, K.D., Chauhan, V. & Surgenor, B. A flexible machine vision system for small part inspection based on a hybrid SVM/ANN approach. J Intell Manuf 31, 103–125 (2020). https://doi.org/10.1007/s10845-018-1438-3
