Abstract
We describe an authoring framework for developing virtual humans for mobile applications. The framework abstracts many of the elements needed for virtual human generation and interaction, including rapid development of nonverbal behavior, lip syncing to speech, dialogue management, access to speech transcription services, and access to mobile sensors such as the microphone, gyroscope, and location components.
© 2015 Springer International Publishing Switzerland
Cite this paper
Feng, A.W., Leuski, A., Marsella, S., Casas, D., Kang, S.H., Shapiro, A. (2015). A Platform for Building Mobile Virtual Humans. In: Brinkman, W.P., Broekens, J., Heylen, D. (eds.) Intelligent Virtual Agents. IVA 2015. Lecture Notes in Computer Science, vol. 9238. Springer, Cham. https://doi.org/10.1007/978-3-319-21996-7_33
Print ISBN: 978-3-319-21995-0
Online ISBN: 978-3-319-21996-7
eBook Packages: Computer Science (R0)