DSpace@MIT
  • DSpace@MIT Home
  • MIT Open Access Articles
  • MIT Open Access Articles
  • View Item

Crowdsourcing facial responses to online videos: Extended abstract

Author(s)
McDuff, Daniel; el Kaliouby, Rana; Picard, Rosalind W.
Download: Picard_Crowdsourcing facial.pdf (5.758 MB)

Open Access Policy

Terms of use
Creative Commons Attribution-Noncommercial-Share Alike http://creativecommons.org/licenses/by-nc-sa/4.0/
Abstract
Traditional observational research methods required an experimenter's presence to record videos of participants, limiting data collection to typically fewer than a few hundred people in a single location. To make a significant leap forward in affective expression data collection and the insights based on it, our work created and validated a novel framework for collecting and analyzing facial responses over the Internet. The first experiment using this framework collected and analyzed 3,268 trackable face videos in under two months. Each participant viewed one or more commercials while their facial response was recorded and analyzed. Our data showed significantly different intensity and dynamics patterns of smile responses between subgroups who reported liking the commercials and those who did not. Since this framework appeared in 2011, we have collected over three million videos of facial responses in over 75 countries using the same methodology, enabling facial analytics to become significantly more accurate and validated across five continents. Many new insights have been discovered from these crowd-sourced facial data, establishing Internet-based measurement of facial responses as reliable and proven. We are now able to provide large-scale evidence for gender, cultural, and age differences in behavior. Today such methods are standard practice in industry for copy-testing advertisements and are increasingly used for online media evaluation, distance learning, and mobile applications.
Date issued
2015-12
URI
http://hdl.handle.net/1721.1/110774
Department
Massachusetts Institute of Technology. Media Laboratory; Program in Media Arts and Sciences (Massachusetts Institute of Technology)
Journal
2015 International Conference on Affective Computing and Intelligent Interaction (ACII)
Publisher
Institute of Electrical and Electronics Engineers (IEEE)
Citation
McDuff, Daniel, Rana el Kaliouby, and Rosalind W. Picard. “Crowdsourcing Facial Responses to Online Videos: Extended Abstract.” 2015 International Conference on Affective Computing and Intelligent Interaction (ACII), Xi'an, China, 21-24 September, 2015. IEEE, 2015. 512–518.
Version: Author's final manuscript
ISBN
978-1-4799-9953-8

Collections
  • MIT Open Access Articles
