DOI: 10.1145/3174910.3174944

Recognition and Feedback of Vowel Utterance with a Good Mouth Shape Based on Sensing Platysma Muscle Bulging

Published: 06 February 2018

Abstract

In public speaking, speakers are evaluated on both verbal and nonverbal delivery, and mouth shape plays an important role in supporting both. Mouth shape is set mainly during vowel utterance. In this research, we define a good mouth shape as one that makes the speaker's pronunciation clear and enriches their facial expression. We assume that a good mouth shape can be inferred from the bulging of the platysma muscle in the neck. Aiming to support vowel utterance with a good mouth shape, we propose a system that recognizes it: we measure the bulging of the platysma with photoreflectors and apply a machine learning method to judge whether a vowel is being uttered with a good mouth shape. We report the results of an experiment measuring the accuracy of the proposed system, and describe an application that gives speakers feedback on vowel utterances with a good mouth shape.
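
The abstract describes the pipeline only at a high level: photoreflectors sense the bulging of the platysma, and a machine learning classifier judges whether a vowel is uttered with a good mouth shape. A minimal sketch of such a pipeline is shown below; the sensor count, window length, features, and the SVM classifier are assumptions made for illustration, not details taken from the paper.

```python
# Minimal sketch (not the authors' implementation): classify vowel utterances
# as "good" vs. "poor" mouth shape from windowed photoreflector readings.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

N_SENSORS = 4   # hypothetical number of photoreflectors placed over the platysma
WINDOW = 50     # hypothetical number of samples per utterance window

def extract_features(window: np.ndarray) -> np.ndarray:
    """Summarize one (WINDOW, N_SENSORS) block of reflectance values.

    Per-channel mean and peak-to-peak range stand in for whatever features
    the authors actually use; they roughly capture how much each spot bulges.
    """
    return np.concatenate([window.mean(axis=0), np.ptp(window, axis=0)])

# Synthetic stand-in data; real data would come from the sensor hardware,
# labeled by whether each vowel was uttered with a good mouth shape.
rng = np.random.default_rng(0)
raw = rng.normal(size=(200, WINDOW, N_SENSORS))   # 200 recorded utterances
labels = rng.integers(0, 2, size=200)             # 1 = good mouth shape, 0 = not

X = np.array([extract_features(w) for w in raw])
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.25, random_state=0)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```

In practice, the synthetic arrays would be replaced by labeled recordings of utterances with good and poor mouth shapes, and the feature extraction tuned to the actual photoreflector placement on the neck.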



    Published In

    AH '18: Proceedings of the 9th Augmented Human International Conference
    February 2018
    229 pages
ISBN: 9781450354158
DOI: 10.1145/3174910

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Author Tags

    1. Machine Learning
    2. Mouth Shape
    3. Presentation Training
    4. Public Speech

    Qualifiers

    • Short-paper
    • Research
    • Refereed limited

    Conference

AH2018: The 9th Augmented Human International Conference
    February 7 - 9, 2018
    Seoul, Republic of Korea

    Acceptance Rates

    Overall Acceptance Rate 121 of 306 submissions, 40%
