
Active object perception using Bayesian classifiers and haptic exploration

Published in: Autonomous Robots

Abstract

To recognise objects using only tactile sensing, humans employ various haptic exploratory procedures (EPs). Because the time, effort, and information-acquisition costs of different EPs vary, choosing the best EP for accurate and efficient perception usually relies on prior knowledge or experience, a process known as active exploration. To endow robots with similar intelligence, an active EP-selection algorithm based on a Gaussian mixture model and a Bayesian classifier has been developed. To choose the best EP for the next perception iteration, both the information gain and the total time cost of all actions required to identify the object are considered. Six EPs were realised on a purpose-built robotic arm platform, allowing eight features representing the object’s surface and geometric properties to be extracted. The algorithm was evaluated on offline data and in real-world experiments, with random EP selection as the baseline. The results show that the active method outperformed the random method, achieving higher accuracy in significantly less time: an average weighted information gain of 132.6 and a time-cost ratio (time spent/total time) of only 0.3.




Acknowledgements

This project is sponsored by Shanghai Sailing Program, Project No. 21YF1414100. The authors would like to thank Dr. Jian Hu for the kind support during the experimental validations, and many thanks go to Mr. George Abrahams for his kind suggestions for the design of the algorithm.

Author information

Authors and Affiliations

Authors

Contributions

TS: Responsible for the methodology, software development, experimental validations and writing—original draft preparation and revision. HL: Responsible for the conceptualization and writing—reviewing and editing. ZM: Responsible for the conceptualization and writing—reviewing.

Corresponding author

Correspondence to Teng Sun.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.


Appendix: Validations using the UCI data


To verify the feasibility of the proposed method, samples obtained from the UCI Machine Learning Repository (www.archive.ics.uci.edu/ml) were also used. The iris data set, which contains 3 objects each with 4 features, and the white-wine data set, which contains 5 objects (wine quality) each with 11 features, were used for validation. For the white-wine data, 4 objects (quality 4–7) and 8 features were selected, as some features were deemed too similar and others lacked sufficient samples, as described in more detail below.

The initial features in the previous simulations were kept the same; in this section, however, an initial feature-selection logic is introduced, based on the learned data and trained classifiers, to decide the best initial feature(s). Owing to a lack of timing information in this test, the time required to acquire each feature was set to a constant, although it can be changed to match a practical feature-extraction procedure.

When deciding the optimal initial feature, the mean value of one random feature \( f_i \) from all the features \( {\textbf {F}} \) is used. Another feature \( f_j \) from one of the objects is subsequently selected to form the feature vector \( {\textbf {f}}_v=(f_i, f_j) \). The weighted information gain (WIG) is then calculated with the corresponding function given in Sect. 2.2. All features except \( f_i \) are traversed as the new \( f_j \), and the feature identity (ID) that provides the largest WIG is recorded as the best feature. The process is repeated with every \( f_i \in {\textbf {F}} \) as the initial feature, recording all feature IDs with the largest WIGs. To decide the overall best initial feature, a feature-count vector \({\textbf {f}}_{n}\) records the number of times each \(f_i \in {\textbf {F}}\) appeared as the best feature. After checking one object, the same process is repeated with another object’s features, and \({\textbf {f}}_n\) is updated. Once the data of all the objects has been examined, the final \({\textbf {f}}_n\) is obtained, and the feature with the largest count in \({\textbf {f}}_n\) is identified as the most useful. The method is described in detail in Algorithm 3.
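The counting procedure above can be sketched as follows. This is a minimal illustration, not the paper's Algorithm 3: the weighted-information-gain function `wig` is a hypothetical placeholder standing in for the function defined in Sect. 2.2.

```python
# Sketch of the initial-feature-selection logic (assumed structure).
# `wig` is a hypothetical stand-in for the paper's WIG function (Sect. 2.2).
import numpy as np

def select_initial_feature(objects, wig):
    """objects: list of 2-D arrays (samples x features), one per object.
    wig: callable scoring a candidate feature pair (f_i, f_j).
    Returns the index of the feature most often yielding the largest WIG."""
    n_features = objects[0].shape[1]
    counts = np.zeros(n_features, dtype=int)        # f_n in the text
    for obj in objects:
        means = obj.mean(axis=0)                    # mean value of each feature
        for i in range(n_features):
            # traverse all other features as f_j, keep the best-scoring ID
            best_j = max(
                (j for j in range(n_features) if j != i),
                key=lambda j: wig(means[i], means[j]),
            )
            counts[best_j] += 1
    return int(np.argmax(counts))                   # overall most useful feature
```

In this form the vote count plays the role of \({\textbf {f}}_n\): each pass over an object adds one vote per candidate initial feature, and the final winner is the feature with the most votes across all objects.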

Using the initial feature-selection method, the best initial features were determined for the iris and white-wine data. Feature 4 was chosen as the initial feature for the iris data because it provided the largest WIG 9 times. In contrast, feature 8 was the best initial feature for the white-wine samples.

To train all possible classifiers, 40 samples for each iris feature and 125 samples for each white-wine quality feature were selected during the training phase. Forty samples were used for each feature of both data sets during the testing phase (there was no validation process). Both the random and active methods were used for the object-recognition tests, and the results were recorded following Sect. 3.1.

Fig. 7 The result of one object-recognition process with the UCI white-wine sample. The total number of EPs is 8. The figure depicts both the confidence and the error of the random and active methods. For the random method, 7 EPs were used before the confidence exceeded the threshold (0.9), and recognition was successful as the error dropped to 0.083. Meanwhile, the active method used feature 8 as the initial feature, and the confidence rose from 0.425 to 0.99 after 3 EPs (3, 7, 8). This shows that the active method can recognise the object more efficiently

The detailed results are shown in Table 6. For the iris data, the recognition accuracy is 100% for both methods, since there are significant differences between the features; however, the active method outperformed the random method in both time-cost ratio and number of EPs used, especially for Iris virginica, where the cost was approximately halved. For the white-wine data, the recognition accuracy decreases owing to the similarity of the feature values, particularly for quality-6, with only 55% for the active method and 35% for the random method. This may be because quality-6 has the most feature samples with a broad value range, and is similar to the quality-2 and quality-4 features, making recognition more difficult. In total, the average accuracy of the active method is 77.5%, while that of the random method is 52%, indicating that the active method performs better. Moreover, the active method achieves the best time-cost ratio and the fewest EPs.

Figure 7 shows one recognition-process result for both the random and active methods on the white-wine sample, with quality-7 serving as the ground truth. As can be seen, the results are comparable to the simulation on artificial data. In addition, with the initial feature selection performed beforehand, the active method’s computation time decreases significantly (Table 3), for two reasons: first, the initial feature-selection time cost is not added; second, it helps to increase the initial confidence.
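The recognition process described above (accumulating EPs until the confidence exceeds the 0.9 threshold) can be sketched as a simple Bayesian update loop. This is an assumed structure for illustration, not the paper's exact implementation: both `select_ep` (the active method maximises cost-weighted information gain; the random baseline samples uniformly) and `likelihood` are hypothetical callables standing in for the paper's components.

```python
# Minimal sketch (assumed structure) of the confidence-threshold recognition loop.
import numpy as np

def recognise(select_ep, likelihood, n_objects, n_eps, threshold=0.9):
    """select_ep: picks the next EP index given the posterior and used EPs.
    likelihood: returns P(observation | object) for each object after an EP.
    Returns the recognised object index and the list of EPs executed."""
    posterior = np.full(n_objects, 1.0 / n_objects)   # uniform prior
    used = []
    while posterior.max() < threshold and len(used) < n_eps:
        ep = select_ep(posterior, used)               # choose the next EP
        obs = likelihood(ep)                          # per-object likelihoods
        posterior = posterior * obs                   # Bayesian update
        posterior /= posterior.sum()
        used.append(ep)
    return int(np.argmax(posterior)), used
```

Under this structure, the random and active methods differ only in `select_ep`, which is why the active method's advantage shows up as fewer iterations of the same loop.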


About this article


Cite this article

Sun, T., Liu, H. & Miao, Z. Active object perception using Bayesian classifiers and haptic exploration. Auton Robot 47, 19–36 (2023). https://doi.org/10.1007/s10514-022-10065-6

