Can Different "Eye" Designs for Anthropomorphic Manga Characters Inform Users of Different Functions of Anthropomorphized Systems?

ABSTRACT
We investigated whether anthropomorphic manga characters (AMCs) with differently designed eyes can inform users which function of a smartphone an anthropomorphized system represents. Specifically, we prepared AMCs drawn by three different artists, each of whom drew 32 eye designs, and we selected four smartphone functions that the different eye designs should express. Although the different designs failed to inform users of the four functions, the eye designs did clearly express users' requirements for smartphones and general characteristics of smartphones.