
Deep multi-model fusion network based real object tactile understanding from haptic data

Published in: Applied Intelligence

Abstract

The tactile information of an object is one of the crucial features that defines the impression of that object. This paper presents a novel multi-model fusion network for tactile understanding of real objects from haptic data. In addition, a low-cost, 3D-printed, artificial-finger-based tactile sensing system is designed to capture haptic information in the form of acceleration profiles, angular velocity, and normal force. The proposed multi-model fusion network comprises three networks. First, we introduce a novel ensemble 2D convolutional neural network, named SpectroNet, which captures spatial features from the spectrogram of the acceleration profile. Second, we design a 1-D convolutional neural network (CNN) with residual connections to extract detailed spatial information from each segment of the collected data. Third, we design a bi-directional gated recurrent unit (BiGRU) network to capture temporal dynamics. Moreover, an attention mechanism is employed in all three networks to weight features according to their contributions, which further enhances performance. Finally, extensive experiments are conducted on our own dataset (60 real objects covering both planar and non-planar surfaces) as well as the TUM surface material database. Empirical evaluations demonstrate that the proposed method significantly outperforms state-of-the-art methods in terms of accuracy, precision, recall, and F1-score. We also find that the proposed multi-model fusion network substantially improves performance compared to any of the individual networks.
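Two ingredients described above can be illustrated compactly: turning a raw acceleration profile into a spectrogram (the input to SpectroNet) and pooling per-segment features with softmax attention weights. The sketch below uses NumPy with a Hann-windowed short-time FFT and an L2-norm attention score; the window length, hop size, and scoring function are illustrative assumptions, not the paper's learned parameters.

```python
import numpy as np

def spectrogram(signal, win_len=256, hop=128):
    """Magnitude spectrogram via a Hann-windowed short-time FFT.

    Returns an array of shape (n_frames, win_len // 2 + 1),
    i.e. a time-frequency image suitable for a 2D CNN.
    """
    window = np.hanning(win_len)
    frames = []
    for start in range(0, len(signal) - win_len + 1, hop):
        seg = signal[start:start + win_len] * window
        frames.append(np.abs(np.fft.rfft(seg)))
    return np.stack(frames)

def attention_pool(features):
    """Softmax-weighted fusion of per-segment feature vectors.

    features: (n_segments, dim). The paper learns attention scores;
    here an L2-norm score is an illustrative stand-in.
    """
    scores = np.linalg.norm(features, axis=1)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ features  # single fused feature vector

# Synthetic 1 kHz "acceleration profile": a 50 Hz texture vibration
# plus a little sensor noise (stand-in for real haptic data).
fs = 1000
t = np.arange(0, 2.0, 1.0 / fs)
accel = (np.sin(2 * np.pi * 50 * t)
         + 0.1 * np.random.default_rng(0).normal(size=t.size))

spec = spectrogram(accel)        # time-frequency input, shape (14, 129)
fused = attention_pool(spec)     # attention-weighted vector, shape (129,)
```

The dominant bin of the resulting spectrogram sits near the 50 Hz texture frequency, which is the kind of cue the spectral branch of the network is meant to pick up.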





Acknowledgements

This research was supported by the Preventive Safety Service Technology Development Program funded by the Korean Ministry of Interior and Safety under Grant 2019-MOIS34-001.


Corresponding author

Correspondence to Seokhee Jeon.



About this article


Cite this article

Joolee, J.B., Uddin, M.A. & Jeon, S. Deep multi-model fusion network based real object tactile understanding from haptic data. Appl Intell 52, 16605–16620 (2022). https://doi.org/10.1007/s10489-022-03181-4

