Towards higher quality character performance in previz

Published: 20 July 2013

Abstract

Previsualization tools are used to produce a preliminary, rough version of a film, television, or other production. Used for both live-action and animated films, they allow a director to set up camera angles and arrange scenes, dialogue, and other scene elements without the expense of hiring live actors, constructing physical sets, or incurring other related production costs. By seeing an early approximation of the final production, decisions about scenes, elements, story, and the factors affecting them can be made early in the process, potentially reducing costs and improving overall quality. Current previsualization technologies have made inroads into generating these "videomatics": controls over cameras and static elements, such as buildings, roads, and scenery, can be quickly incorporated from low-cost libraries of 3D assets, and effects such as explosions, running water, and smoke can be quickly generated in previz scenes with commodity software.

    Published In

    DigiPro '13: Proceedings of the Symposium on Digital Production
    July 2013
    52 pages
    ISBN:9781450321365
    DOI:10.1145/2491832

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Author Tags

    1. animation
    2. gestures
    3. previsualization

    Qualifiers

    • Research-article

    Conference

    DigiPro '13
    Sponsor:
    DigiPro '13: The Digital Production Symposium
    July 20, 2013
    Anaheim, California
