
Serial number inspection for ceramic membranes via an end-to-end photometric-induced convolutional neural network framework

Journal of Intelligent Manufacturing

Abstract

The ceramic membrane plays an important role in the wastewater treatment industry. The serial number engraved on each ceramic membrane is an essential feature for identification. Here, an automatic inspection system for the serial numbers of ceramic membranes is proposed to replace manual inspection. To the best of our knowledge, this is the first attempt to automatically inspect the serial numbers of ceramic membranes. To suppress the error accumulation inherent in previous stepwise approaches, an end-to-end photometric-induced convolutional neural network framework is proposed for this automatic inspection system. The framework consists of three sequential stages: a photometric stage that performs photometric stereo, a localization stage that localizes the text region, and a recognition stage that produces the recognition results. The photometric stage integrates three-dimensional shape information about the serial numbers into the framework to improve inspection performance. Since the three stages are jointly trained, a theoretical analysis of the contributions of the local losses is provided to ensure the convergence of the framework, which guides the design of its total loss function. Experimental results demonstrate that the proposed framework achieves better inspection performance with a reasonable inspection time compared with state-of-the-art deep learning methods, reaching a localization F-score of 95.61% and a recognition accuracy of 96.49%. Furthermore, the results suggest that the proposed automatic inspection system could benefit the intelligent upgrading of ceramic membrane manufacturing and wastewater treatment if it is equipped with a perception system and a control system in ceramic membrane production lines and wastewater treatment processes.
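Although the details of the learned photometric stage are given in the full paper, the principle it builds on is classical photometric stereo (Woodham, 1980): under a Lambertian reflectance assumption, per-pixel surface normals and albedo can be recovered by least squares from several images of the same view captured under known, distant lights. The sketch below is a minimal NumPy illustration of that principle, not the authors' implementation; all function and parameter names are hypothetical.

```python
import numpy as np

def photometric_stereo(images, light_dirs):
    """Recover per-pixel surface normals and albedo from k grayscale images
    of a static scene lit by k known, distant point lights (Lambertian model).

    images:     (k, H, W) array of image intensities
    light_dirs: (k, 3) array of unit light-direction vectors
    """
    k, h, w = images.shape
    intensities = images.reshape(k, -1)                  # stack pixels: (k, H*W)
    # Lambertian model: I = L @ g, where g = albedo * normal, shape (3, H*W).
    g, *_ = np.linalg.lstsq(light_dirs, intensities, rcond=None)
    albedo = np.linalg.norm(g, axis=0)                   # reflectance magnitude
    normals = g / np.maximum(albedo, 1e-8)               # unit surface normals
    return normals.reshape(3, h, w), albedo.reshape(h, w)
```

Similarly, joint training of the three stages implies a total loss that combines the local losses of the photometric, localization, and recognition stages. A weighted sum such as the one below is one plausible form; the weights shown are placeholders, not values taken from the paper.

```python
def total_loss(l_photometric, l_localization, l_recognition,
               w_p=1.0, w_l=1.0, w_r=1.0):
    # Hypothetical weighted combination of the three stage-wise losses.
    return w_p * l_photometric + w_l * l_localization + w_r * l_recognition
```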




Acknowledgments

This work was supported in part by the National Natural Science Foundation of China (Nos. 91648108 and 51875108), the Key Laboratory Construction Projects in Guangdong (No. 2017B030314178), the Project of Jihua Laboratory (No. X190071UZ190), the Research Fund for Colleges and Universities in Huizhou, China, and the Science and Technology Program of Guangzhou, China (No. 201802020010).

Author information


Correspondence to Nian Cai or Han Wang.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Li, F., Cai, N., Deng, X. et al. Serial number inspection for ceramic membranes via an end-to-end photometric-induced convolutional neural network framework. J Intell Manuf 33, 1373–1392 (2022). https://doi.org/10.1007/s10845-020-01730-7

