Abstract
Providing vibrotactile feedback that corresponds to the state of virtual texture surfaces allows users to sense their haptic properties. However, hand-tuning such vibrotactile stimuli for every texture state takes considerable time. We therefore propose a new approach that builds models to generate vibrotactile signals automatically from texture images or attributes. In this paper, we make the first attempt to generate vibrotactile stimuli by leveraging deep generative adversarial training. Specifically, we use conditional generative adversarial networks (GANs) to generate the vibration produced while a pen moves over a surface. A preliminary user study showed that users could not discriminate generated signals from genuine ones and perceived the generated signals as realistic. Our model can thus provide appropriate vibration according to texture images or their attributes. The approach is applicable to any case in which users touch various surfaces in a predefined way.
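As a rough illustration of the kind of model the abstract describes, the sketch below implements a conditional GAN in PyTorch whose generator maps noise plus a conditioning vector (e.g., a texture-image embedding or an attribute vector) to a short spectrogram-like representation of pen-surface vibration, while a discriminator judges (signal, condition) pairs. This is a minimal sketch under stated assumptions: the layer sizes, the 64x64 output shape, and the idea of recovering a waveform from a spectrogram afterwards (e.g., with Griffin-Lim) are illustrative choices, not the authors' exact architecture.

```python
# Minimal conditional-GAN sketch for vibrotactile signal generation.
# All dimensions below are illustrative assumptions, not the paper's design.
import torch
import torch.nn as nn

Z_DIM, COND_DIM = 100, 128      # assumed noise and condition sizes
SPEC_H, SPEC_W = 64, 64         # assumed spectrogram-like output shape


class Generator(nn.Module):
    """Noise + condition vector -> 1x64x64 spectrogram-like output."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(Z_DIM + COND_DIM, 256, 4, 1, 0),  # -> 4x4
            nn.BatchNorm2d(256), nn.ReLU(True),
            nn.ConvTranspose2d(256, 128, 4, 2, 1),               # -> 8x8
            nn.BatchNorm2d(128), nn.ReLU(True),
            nn.ConvTranspose2d(128, 64, 4, 2, 1),                # -> 16x16
            nn.BatchNorm2d(64), nn.ReLU(True),
            nn.ConvTranspose2d(64, 32, 4, 2, 1),                 # -> 32x32
            nn.BatchNorm2d(32), nn.ReLU(True),
            nn.ConvTranspose2d(32, 1, 4, 2, 1),                  # -> 64x64
            nn.Tanh(),
        )

    def forward(self, z, cond):
        # Concatenate noise and condition, reshape to a 1x1 "image".
        x = torch.cat([z, cond], dim=1).unsqueeze(-1).unsqueeze(-1)
        return self.net(x)


class Discriminator(nn.Module):
    """Scores (spectrogram, condition) pairs as real or generated."""

    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 4, 2, 1), nn.LeakyReLU(0.2, True),    # -> 32x32
            nn.Conv2d(32, 64, 4, 2, 1), nn.LeakyReLU(0.2, True),   # -> 16x16
            nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2, True),  # -> 8x8
        )
        self.fc = nn.Linear(128 * 8 * 8 + COND_DIM, 1)

    def forward(self, spec, cond):
        h = self.conv(spec).flatten(1)
        return self.fc(torch.cat([h, cond], dim=1))


def train_step(G, D, opt_g, opt_d, real_spec, cond):
    """One adversarial update with the standard non-saturating BCE losses."""
    bce = nn.BCEWithLogitsLoss()
    b = real_spec.size(0)
    z = torch.randn(b, Z_DIM)

    # Discriminator: real pairs -> 1, generated pairs -> 0.
    fake_spec = G(z, cond).detach()
    d_loss = bce(D(real_spec, cond), torch.ones(b, 1)) + \
             bce(D(fake_spec, cond), torch.zeros(b, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: try to make the discriminator output 1 for generated pairs.
    g_loss = bce(D(G(z, cond), cond), torch.ones(b, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```

In such a setup the conditioning vector could come from an image encoder when the input is a texture photograph, or from an attribute (e.g., one-hot) vector when only material attributes are given; the same generator can serve both cases, which matches the abstract's claim that vibration can be generated from images or attributes.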
Copyright information
© 2018 Springer International Publishing AG, part of Springer Nature
About this paper
Cite this paper
Ujitoko, Y., Ban, Y. (2018). Vibrotactile Signal Generation from Texture Images or Attributes Using Generative Adversarial Network. In: Prattichizzo, D., Shinoda, H., Tan, H., Ruffaldi, E., Frisoli, A. (eds) Haptics: Science, Technology, and Applications. EuroHaptics 2018. Lecture Notes in Computer Science, vol 10894. Springer, Cham. https://doi.org/10.1007/978-3-319-93399-3_3
DOI: https://doi.org/10.1007/978-3-319-93399-3_3
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-93398-6
Online ISBN: 978-3-319-93399-3
eBook Packages: Computer Science (R0)