
Demographic Feature Isolation for Bias Research using Deepfakes

Published: 10 October 2022

Abstract

This paper explores the complexity of what constitutes the demographic features of race and how race is perceived. "Race" is composed of a variety of factors, including skin tone, facial features, and accent. Isolating these interrelated features is difficult, and failing to do so properly can easily introduce confounding factors. Here we propose a novel method to isolate features of race using AI-based technology and to measure the impact these modifications have on an outcome variable of interest, namely perceived credibility. We used videos from a deception dataset for which the ground truth is known and created three conditions: 1) a Black vs. White CycleGAN image condition; 2) an original vs. deepfake video condition; and 3) an original vs. deepfake still-frame condition. We crowd-sourced 1736 responses to measure how credibility was influenced by changing the perceived race. We found that it is possible to alter perceived race by modifying visual demographic features alone. However, we did not find any statistically significant differences in credibility across our experiments based on these changes. Our findings help quantify intuitions from prior research that the relationship between racial perception and credibility is more complex than visual features alone. The presented deepfake framework could be incorporated to precisely measure the impact of a wider range of demographic features (such as gender or age), thanks to a degree of fine-grained isolation and control that was previously impossible in a lab setting.
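
As a rough illustration of the kind of outcome comparison described in the abstract (crowd-sourced credibility ratings for original vs. race-modified stimuli, checked for a statistically significant difference), the minimal sketch below runs a Welch's t-test with SciPy. The CSV file and column names are illustrative assumptions, not the authors' actual data format or analysis code.

```python
# Hedged sketch: compare mean credibility ratings between the original and
# race-swapped (deepfake) conditions. File name and columns are assumptions.
import pandas as pd
from scipy import stats

ratings = pd.read_csv("credibility_ratings.csv")  # assumed columns: condition, credibility

original = ratings.loc[ratings["condition"] == "original", "credibility"]
swapped = ratings.loc[ratings["condition"] == "deepfake", "credibility"]

# Welch's t-test (no equal-variance assumption) on mean credibility ratings.
t_stat, p_value = stats.ttest_ind(original, swapped, equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
```

A non-significant p-value in such a test would be consistent with the paper's finding that changing visually perceived race alone did not measurably shift credibility.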

Supplementary Material

MP4 File (MM25-brave.mp4)
We live in a biased world. I'm biased, you're biased, we are all biased. We propose a brave new framework to systematically explore various biases (by bias, we mean bias around race, gender, age, etc.), and we do this through feature isolation. The idea is that by holding all features constant and varying one feature, we can more definitively know how that feature contributes to the bias. Our team was motivated to take a chance on this because, if we could measure bias more precisely, it might convince lawmakers, law enforcers, and executives of all kinds to handle issues of bias more logically and fairly.
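
To make the feature-isolation idea concrete, the sketch below builds matched stimulus pairs that differ only in the visually perceived race of the speaker, holding everything else (speaker, words, lighting, motion) constant. `swap_perceived_race` is a hypothetical placeholder for a CycleGAN / face-swap model, not the authors' actual pipeline; frame extraction uses OpenCV.

```python
# Illustrative only: build a stimulus pair that varies a single demographic
# feature. swap_perceived_race() is a hypothetical stand-in for a trained
# image-to-image model (e.g., a CycleGAN); it is not part of the paper's code.
import cv2

def swap_perceived_race(frame):
    """Hypothetical placeholder for a generative model that alters only the
    visually perceived race of the person in the frame."""
    raise NotImplementedError("plug in a trained image-to-image model here")

def build_stimulus_pair(video_path, frame_index=0):
    cap = cv2.VideoCapture(video_path)
    cap.set(cv2.CAP_PROP_POS_FRAMES, frame_index)
    ok, original = cap.read()
    cap.release()
    if not ok:
        raise IOError(f"could not read frame {frame_index} from {video_path}")
    # The only manipulated variable is perceived race; all other content is identical.
    modified = swap_perceived_race(original)
    return original, modified
```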



      Published In

MM '22: Proceedings of the 30th ACM International Conference on Multimedia
October 2022, 7537 pages
ISBN: 9781450392037
DOI: 10.1145/3503161

      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Publication History

      Published: 10 October 2022


      Author Tags

      1. artificial intelligence
      2. confounding factors
      3. credibility
      4. racial bias

      Qualifiers

      • Research-article

      Funding Sources

      • U.S. Defense Advanced Research Projects Agency (DARPA)

      Conference

      MM '22

      Acceptance Rates

      Overall Acceptance Rate 2,145 of 8,556 submissions, 25%

