
Knowledge-based hybrid connectionist models for morphologic reasoning

  • Original Paper
  • Published:
Machine Vision and Applications

Abstract

Texture morphology perception provides essential feedback for robots in tactile tasks such as electrical palpation, manipulation, and object recognition in complex, wet, or dark working conditions. However, morphologic information is hard to quantify and morphologic features are hard to define, which makes it difficult to exploit prior tactile experience in detection and leads to large dataset requirements, high time costs, and frequent model retraining for new targets. This study introduces a hybrid connectionist symbolic model (HCSM) that integrates prior symbolic human experience with an end-to-end neural network. Because its symbolic component encodes human knowledge, HCSM requires smaller datasets; it also improves the transferability of detection and the interpretability of recognition results, while retaining the neural network's ease of training. HCSM thus combines the merits of connectionist and symbolic models. We implemented tactile morphologic detection of basic geometric textures (such as bulges and ridges) using HCSM. The trained model can be transferred to detect gaps and holes by manually adjusting the symbolic definition, without retraining; other new morphologies can likewise be detected by modifying only the symbolic model. We compared the recognition performance of the proposed model with that of traditional classification models, including LeNet, VGG16, ResNet, XGBoost, and DenseNet, and HCSM achieved the best recognition accuracy. Moreover, compared with classic classification models, our method is less likely to misrecognize a target as a completely different counterpart, providing a degree of guarantee on the generalization boundaries of recognition.
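The division of labor the abstract describes (a trained network producing low-level tactile features, and a human-editable symbolic layer mapping them to morphology classes) can be illustrated with a minimal sketch. Everything below is hypothetical and not taken from the paper: `extract_primitives` merely stands in for the neural front end (a hand-written second-difference filter, not a network), and the rule table is an invented example of how retargeting from bulges to holes could amount to editing a symbolic definition rather than retraining.

```python
import numpy as np

def extract_primitives(profile):
    """Stand-in for the trained neural front end: maps a 1-D tactile
    height profile to symbolic primitives. Here a discrete second
    derivative approximates local curvature, purely for illustration."""
    curvature = np.diff(profile, n=2)
    return {
        # Strong negative curvature somewhere -> a local bump apex.
        "has_convex_peak": bool(curvature.min() < -2.0),
        # Strong positive curvature somewhere -> a local dent bottom.
        "has_concave_dip": bool(curvature.max() > 2.0),
    }

# Symbolic layer: human-editable rules over the primitives.
# Transferring to a new morphology means editing this table, not retraining.
RULES = {
    "bulge": lambda p: p["has_convex_peak"] and not p["has_concave_dip"],
    "hole":  lambda p: p["has_concave_dip"] and not p["has_convex_peak"],
    "flat":  lambda p: not p["has_convex_peak"] and not p["has_concave_dip"],
}

def classify(profile):
    p = extract_primitives(profile)
    for label, rule in RULES.items():
        if rule(p):
            return label
    return "unknown"

bump = np.array([0, 0, 1, 3, 1, 0, 0], dtype=float)
print(classify(bump))   # bulge
print(classify(-bump))  # hole: same model, the "hole" rule is the
                        # mirrored symbolic definition of "bulge"
```

The point of the sketch is the interface, not the filter: because the decision logic lives in a declarative rule table rather than in learned weights, a new target class only requires a new (or edited) rule over the same primitives.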


Data Availability

All data and models generated or used during this study are available from the corresponding author upon request.

Code Availability

All code generated or used during this study is available from the corresponding author upon request.

References

  1. Chen, M., Li, K., Cheng, G., He, K., Zhang, D., Li, W., Feng, Y., Wei, L., Li, W., Li, W., et al.: Touchpoint-tailored ultrasensitive piezoresistive pressure sensors with a broad dynamic response range and low detection limit. ACS Appl. Mater. Interfaces 11(2), 2551–2558 (2018). https://doi.org/10.1021/acsami.8b20284


  2. Park, J., Kim, M., Lee, Y., Lee, H.S., Ko, H.: Fingertip skin-inspired microstructured ferroelectric skins discriminate static/dynamic pressure and temperature stimuli. Sci. Adv. 1(9), e1500661 (2015). https://doi.org/10.1126/sciadv.1500661


  3. Umer, S., Dhara, B.C., Chanda, B.: Iris recognition using multiscale morphologic features. Pattern Recogn. Lett. 65, 67–74 (2015). https://doi.org/10.1016/j.patrec.2015.07.008


  4. Shih, F.Y.: Object representation and recognition using mathematical morphology model. J. Syst. Integr. 1(2), 235–256 (1991)


  5. Kim, K., Sim, M., Lim, S.H., Kim, D., Lee, D., Shin, K., Moon, C., Choi, J., Jang, J.E.: Tactile avatar: tactile sensing system mimicking human tactile cognition. Adv. Sci. 8(7), 2002362 (2021). https://doi.org/10.1002/advs.202002362


  6. Romano, J.M., Hsiao, K., Niemeyer, G., Chitta, S., Kuchenbecker, K.J.: Human-inspired robotic grasp control with tactile sensing. IEEE Trans. Rob. 27(6), 1067–1079 (2011). https://doi.org/10.1109/TRO.2011.2162271


  7. Sundaram, S., Kellnhofer, P., Li, Y., Zhu, J.Y., Torralba, A., Matusik, W.: Learning the signatures of the human grasp using a scalable tactile glove. Nature 569(7758), 698–702 (2019). https://doi.org/10.1038/s41586-019-1234-z


  8. Dargahi, J., Najarian, S.: Human tactile perception as a standard for artificial tactile sensing-a review. Int. J. Med. Rob. Comput. Assisted Surg. 1(1), 23–35 (2004). https://doi.org/10.1002/rcs.3


  9. Tanaka, Y., Horita, Y., Sano, A., Fujimoto, H.: Tactile sensing utilizing human tactile perception. In: 2011 IEEE World Haptics Conference, pp. 621–626. IEEE (2011). https://doi.org/10.1109/WHC.2011.5945557

  10. Piacenza, P., Sherman, S., Ciocarlie, M.: Data-driven super-resolution on a tactile dome. IEEE Rob. Autom. Lett. 3(3), 1434–1441 (2018). https://doi.org/10.1109/LRA.2018.2800081


  11. Molchanov, A., Kroemer, O., Su, Z., Sukhatme, G. S.: Contact localization on grasped objects using tactile sensing. In: 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 216–222 (2016). https://doi.org/10.1109/IROS.2016.7759058

  12. Garcia-Garcia, A., Zapata-Impata, B.S., Orts-Escolano, S., Gil, P., Garcia-Rodriguez, J.: Tactilegcn: A graph convolutional network for predicting grasp stability with tactile sensors. In: 2019 International Joint Conference on Neural Networks (IJCNN), pp. 1–8 (2019). https://doi.org/10.1109/IJCNN.2019.8851984

  13. Zhou, S.M., Gan, J.Q.: Low-level interpretability and high-level interpretability: a unified view of data-driven interpretable fuzzy system modelling. Fuzzy Sets Syst. 159(23), 3091–3131 (2008). https://doi.org/10.1016/j.fss.2008.05.016


  14. Chakraborty, D., Başağaoğlu, H., Winterle, J.: Interpretable vs. noninterpretable machine learning models for data-driven hydro-climatological process modeling. Expert Syst. Appl. 170, 114498 (2021). https://doi.org/10.1016/j.eswa.2020.114498


  15. Fan, C., Xiao, F., Yan, C., Liu, C., Li, Z., Wang, J.: A novel methodology to explain and evaluate data-driven building energy performance models based on interpretable machine learning. Appl. Energy 235, 1551–1560 (2019). https://doi.org/10.1016/j.apenergy.2018.11.081


  16. Thomas, G., Chien, M., Tamar, A., Ojea, J.A., Abbeel, P.: Learning robotic assembly from CAD. In: 2018 IEEE International Conference on Robotics and Automation (ICRA), pp. 3524–3531. IEEE (2018). https://doi.org/10.1109/ICRA.2018.8460696

  17. Alashkar, T., Jiang, S., Wang, S., Fu, Y.: Examples-rules guided deep neural network for makeup recommendation. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 31, pp. 941–947 (2017). https://aaai.org/ocs/index.php/AAAI/AAAI17/paper/view/14773

  18. Yao, L., Mao, C., Luo, Y.: Clinical text classification with rule-based features and knowledge-guided convolutional neural networks. BMC Med. Inf. Decis. Making 19(3), 71 (2019). https://doi.org/10.1186/s12911-019-0781-4


  19. Yang, F., Liu, N., Du, M., Zhou, K., Ji, S., Hu, X.: Deep neural networks with knowledge instillation. In: Proceedings of the 2020 SIAM International Conference on Data Mining, pp. 370–378 (2020). https://doi.org/10.1137/1.9781611976236.42

  20. Hu, Z., Yang, Z., Salakhutdinov, R., Xing, E.: Deep neural networks with massive learned knowledge. In: Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pp. 1670–1679 (2016)

  21. Rutishauser, U., Walther, D., Koch, C., Perona, P.: Is bottom-up attention useful for object recognition? In: Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), pp. II–II (2004). https://doi.org/10.1109/CVPR.2004.1315142

  22. LeCun, Y., Bottou, L., Bengio, Y., Haffner, P.: Gradient-based learning applied to document recognition. Proc. IEEE 86(11), 2278–2324 (1998). https://doi.org/10.1109/5.726791


  23. Al-Jawfi, R.: Handwriting Arabic character recognition LeNet using neural network. Int. Arab J. Inf. Technol. 6(3), 304–309 (2009)


  24. Wei, G., Li, G., Zhao, J., He, A.: Development of a LeNet-5 gas identification CNN structure for electronic noses. Sensors 19(1), 217 (2019). https://doi.org/10.3390/s19010217


  25. Krizhevsky, A., Sutskever, I., Hinton, G.E.: Imagenet classification with deep convolutional neural networks. Adv. Neural. Inf. Process. Syst. 25, 1097–1105 (2012). https://doi.org/10.1145/3065386


  26. Khare, S.K., Bajaj, V.: Time-frequency representation and convolutional neural network-based emotion recognition. IEEE Trans. Neural Netw. Learn. Syst. (2020). https://doi.org/10.1109/TNNLS.2020.3008938


  27. Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014) https://arxiv.org/abs/1409.1556v2

  28. Ullo, S.L., Khare, S.K., Bajaj, V., Sinha, G.: Hybrid computerized method for environmental sound classification. IEEE Access 8, 124055–124065 (2020). https://doi.org/10.1109/ACCESS.2020.3006082


  29. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/CVPR.2016.90

  30. Wu, Z., Shen, C., Van Den Hengel, A.: Wider or deeper: revisiting the resnet model for visual recognition. Pattern Recogn. 90, 119–133 (2019). https://doi.org/10.1016/j.patcog.2019.01.006


  31. Chen, T., Guestrin, C.: Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd ACM Sigkdd International Conference on Knowledge Discovery and Data Mining, pp. 785–794 (2016). https://doi.org/10.1145/2939672.2939785

  32. Iandola, F., Moskewicz, M., Karayev, S., Girshick, R., Darrell, T., Keutzer, K.: Densenet: implementing efficient convnet descriptor pyramids. arXiv preprint arXiv:1404.1869 (2014). https://arxiv.org/abs/1404.1869

  33. Mazid, A.M., Russell, R.A.: A robotic opto-tactile sensor for assessing object surface texture. In: 2006 IEEE Conference on Robotics, Automation and Mechatronics, pp. 1–5 (2006). https://doi.org/10.1109/RAMECH.2006.252725

  34. Chun, S., Hwang, I., Son, W., Chang, J.-H., Park, W.: Recognition, classification, and prediction of the tactile sense. Nanoscale 10(22), 10545–10553 (2018). https://doi.org/10.1039/C8NR00595H


  35. Jamali, N., Byrnes-Preston, P., Salleh, R., Sammut, C.: Texture recognition by tactile sensing. In: Australasian Conference on Robotics and Automation (ACRA) (2009)

  36. Jamali, N., Sammut, C.: Material classification by tactile sensing using surface textures. In: 2010 IEEE International Conference on Robotics and Automation, pp. 2336–2341 (2010). https://doi.org/10.1109/ROBOT.2010.5509675

  37. Jamali, N., Sammut, C.: Majority voting: material classification by tactile sensing using surface texture. IEEE Trans. Rob. 27(3), 508–521 (2011). https://doi.org/10.1109/ROBOT.2010.5509675


  38. Taddeucci, D., Laschi, C., Lazzarini, R., Magni, R., Dario, P., Starita, A.: An approach to integrated tactile perception. In: Proceedings of International Conference on Robotics and Automation, pp. 3100–3105 (1997). https://doi.org/10.1109/ROBOT.1997.606759

  39. Kaboli, M., Walker, R., Cheng, G.: In-hand object recognition via texture properties with robotic hands, artificial skin, and novel tactile descriptors. In: 2015 IEEE-RAS 15th International Conference on Humanoid Robots (Humanoids), pp. 1155–1160 (2015). https://doi.org/10.1109/HUMANOIDS.2015.7363508

  40. Kaboli, M., Cheng, G.: Novel tactile descriptors and a tactile transfer learning technique for active in-hand object recognition via texture properties. In: IEEE-RAS International Conference on Humanoid Robots, Workshop on Tactile Sensing for Manipulation: New Progress and Challenges (2016)

  41. Kaboli, M., Cheng, G.: Robust tactile descriptors for discriminating objects from textural properties via artificial robotic skin. IEEE Trans. Rob. 34(4), 985–1003 (2018). https://doi.org/10.1109/TRO.2018.2830364


  42. Liu, H., Yu, Y., Sun, F., Gu, J.: Visual-tactile fusion for object recognition. IEEE Trans. Autom. Sci. Eng. 14(2), 996–1008 (2016). https://doi.org/10.1109/TASE.2016.2549552


  43. Song, A., Han, Y., Hu, H., Li, J.: A novel texture sensor for fabric texture measurement and classification. IEEE Trans. Instrum. Meas. 63(7), 1739–1747 (2013). https://doi.org/10.1109/TIM.2013.2293812


  44. Cretu, A.-M., De Oliveira, T.E.A., Da Fonseca, V.P., Tawbe, B., Petriu, E.M., Groza, V.Z.: Computational intelligence and mechatronics solutions for robotic tactile object recognition. In: 2015 IEEE 9th International Symposium on Intelligent Signal Processing (WISP) Proceedings, pp. 1–6 (2015). https://doi.org/10.1109/WISP.2015.7139165

  45. Fang, B., Yang, C., Sun, F., Liu, H.: Visual-tactile fusion for robotic stable grasping. In: Industrial Robotics-New Paradigms. IntechOpen (2020)

  46. Yuan, W., Mo, Y., Wang, S., Adelson, E.H.: Active clothing material perception using tactile sensing and deep learning. In: 2018 IEEE International Conference on Robotics and Automation (ICRA), pp. 4842–4849 (2018). https://doi.org/10.1109/ICRA.2018.8461164

  47. Rasouli, M., Chen, Y., Basu, A., Kukreja, S.L., Thakor, N.V.: An extreme learning machine-based neuromorphic tactile sensing system for texture recognition. IEEE Trans. Biomed. Circuits Syst. 12(2), 313–325 (2018). https://doi.org/10.1109/TBCAS.2018.2805721


  48. Luo, S., Yuan, W., Adelson, E., Cohn, A.G., Fuentes, R.: Vitac: Feature sharing between vision and tactile sensing for cloth texture recognition. In: 2018 IEEE International Conference on Robotics and Automation (ICRA), pp. 2722–2727 (2018). https://doi.org/10.1109/ICRA.2018.8460494

  49. Ward-Cherrier, B., Pestell, N., Lepora, N.F.: Neurotac: A neuromorphic optical tactile sensor applied to texture recognition. In: 2020 IEEE International Conference on Robotics and Automation (ICRA), pp. 2654–2660 (2020). https://doi.org/10.1109/ICRA40945.2020.9197046

  50. Abd, M.A., Paul, R., Aravelli, A., Bai, O., Lagos, L., Lin, M., Engeberg, E.D.: Hierarchical tactile sensation integration from prosthetic fingertips enables multi-texture surface recognition. Sensors 21(13), 4324 (2021). https://doi.org/10.3390/s21134324


  51. Sankar, S., Balamurugan, D., Brown, A., Ding, K., Xu, X., Low, J.H., Yeow, C.H., Thakor, N.: Texture discrimination with a soft biomimetic finger using a flexible neuromorphic tactile sensor array that provides sensory feedback. Soft Rob. 8(5), 577–587 (2021). https://doi.org/10.1089/soro.2020.0016


  52. Sundaram, S., Kellnhofer, P., Li, Y., Zhu, J.-Y., Torralba, A., Matusik, W.: Learning the signatures of the human grasp using a scalable tactile glove. Nature 569(7758), 698–702 (2019). https://doi.org/10.1038/s41586-019-1234-z


  53. Wang, Y., Chen, J., Mei, D.: Recognition of surface texture with wearable tactile sensor array: a pilot Study. Sens. Actuators A 307, 111972 (2020). https://doi.org/10.1016/j.sna.2020.111972


  54. Garcia-Garcia, A., Zapata-Impata, B.S., Orts-Escolano, S., Gil, P., Garcia-Rodriguez, J.: Tactilegcn: A graph convolutional network for predicting grasp stability with tactile sensors. In: 2019 International Joint Conference on Neural Networks (IJCNN), pp. 1–8 (2019). https://doi.org/10.1109/IJCNN.2019.8851984

  55. Gu, F., Sng, W., Taunyazov, T., Soh, H.: TactileSGNet: a spiking graph neural network for event-based tactile object recognition. In: 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 9876–9882 (2020). https://doi.org/10.1109/IROS45743.2020.9341421

  56. Yan, Y., Hu, Z., Shen, Y., Pan, J.: Surface texture recognition by deep learning-enhanced tactile sensing. Adv. Intell. Syst. 21, 76 (2021). https://doi.org/10.1002/aisy.202100076


  57. Guo, Z., Mo, L., Ding, Y., Zhang, Q., Meng, X., Wu, Z., Chen, Y., Cao, M., Wang, W., Li, L.: Printed and flexible capacitive pressure sensor with carbon nanotubes based composite dielectric layer. Micromachines 10(11), 715 (2019). https://doi.org/10.3390/mi10110715


  58. Khan, S., Tinku, S., Lorenzelli, L., Dahiya, R.S.: Flexible tactile sensors using screen-printed P (VDF-TrFE) and MWCNT/PDMS composites. IEEE Sens. J. 15(6), 3146–3155 (2014). https://doi.org/10.1109/JSEN.2014.2368989


  59. Chortos, A., Liu, J., Bao, Z.: Pursuing prosthetic electronic skin. Nat. Mater. 15(9), 937–950 (2016). https://doi.org/10.1038/nmat4671


  60. He, K., Zhao, L., Yu, P., Liu, L.: A contact force measure sensor based on resistance-array-type sensor. In: 2017 32nd Youth Academic Annual Conference of Chinese Association of Automation (YAC), pp. 760–763. IEEE (2017)

  61. Sivasankari, M., Anandan, R.: Regression analysis on sea surface temperature. In: Intelligent Computing and Innovation on Data Science, pp. 595–601 (2020). https://doi.org/10.1007/978-981-15-3284-9_68

  62. Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K. Q.: Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017)

  63. Katz, G., Barrett, C., Dill, D. L., Julian, K., Kochenderfer, M. J.: Reluplex: an efficient SMT solver for verifying deep neural networks. In International Conference on Computer Aided Verification, pp. 97–117 (2017). https://doi.org/10.1007/978-3-319-63387-9_5

  64. Dunne, R.A., Campbell, N.A.: On the pairing of the softmax activation and cross-entropy penalty functions and the derivation of the softmax activation function. In: Proceedings of the 8th Australian Conference on Neural Networks, vol. 181, p. 185 (1997)

  65. Choi, K., Fazekas, G., Sandler, M., Cho, K.: Convolutional recurrent neural networks for music classification. In: 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 2392–2396 (2017). https://doi.org/10.1109/ICASSP.2017.7952585

  66. Ioffe, S., Szegedy, C.: Batch normalization: Accelerating deep network training by reducing internal covariate shift. In: International Conference on Machine Learning, pp. 448–456 (2015)


Funding

This work was supported in part by the Major Scientific and Technological Innovation Project of Shandong Province under Grant No. 2019JZZY010128, in part by the National Natural Science Foundation of China under Grant Nos. 61821005 and 91748212, and in part by the Sichuan Science and Technology Program under Grant No. 2020YESY0012.

Author information


Contributions

KH, LL, and NX conceived and designed the study. KH and WW designed the algorithm. KH and GL performed the experiments. PY and GL debugged the collecting system. KH, FT, and WW wrote the paper. All authors read and approved the manuscript.

Corresponding author

Correspondence to Kai He.

Ethics declarations

Conflict of interest

The authors declare that no conflict of interest exists in the submission of this manuscript, and all authors have approved the manuscript for publication.

Consent to participate

Not applicable.

Consent to publish

Not applicable.

Ethics approval

Not applicable.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

He, K., Wang, W., Li, G. et al. Knowledge-based hybrid connectionist models for morphologic reasoning. Machine Vision and Applications 34, 29 (2023). https://doi.org/10.1007/s00138-023-01374-6

