ABSTRACT
Datasets, algorithms, machine learning models and AI-powered tools used for perception, prediction and decision making constitute the core of affective and wellbeing computing. The majority of these are prone to data or algorithmic bias (e.g., along demographic attributes such as race, age and gender) that could have catastrophic consequences for many members of society. Therefore, making considerations and providing solutions to avoid and/or mitigate such bias are of utmost importance for creating and deploying fair and unbiased affective and wellbeing computing systems. This talk will present the Cambridge Affective Intelligence and Robotics (AFAR) Lab's (https://cambridge-afar.github.io/) research explorations in this area.
The first part of the talk will discuss the lack of publicly available datasets with fair distributions across the human population, and will present a systematic investigation of bias and fairness in facial expression recognition and mental health prediction by comparing various approaches on well-known benchmark datasets. The second part of the talk will question whether counterfactuals can provide a solution to data imbalance, and will introduce an attempt to achieve fairer prediction models for facial expression recognition, while noting the limitations of a counterfactual approach employed at the pre-processing, in-processing and post-processing stages to mitigate bias.
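A systematic investigation of this kind typically compares a model's performance across demographic subgroups. As a minimal sketch (not the authors' actual evaluation protocol), the following computes an accuracy-parity gap, i.e. the largest difference in per-group accuracy, over hypothetical classifier outputs; the toy labels and group names are illustrative assumptions.

```python
import numpy as np

def accuracy_parity_gap(y_true, y_pred, groups):
    """Largest pairwise difference in per-group accuracy.

    A gap of 0 means the classifier is equally accurate for every
    demographic group; larger values indicate unequal treatment.
    """
    accs = []
    for g in np.unique(groups):
        mask = groups == g
        accs.append((y_true[mask] == y_pred[mask]).mean())
    return max(accs) - min(accs)

# Toy illustration with hypothetical predictions for two groups "a" and "b".
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 0, 1, 1])
groups = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
print(accuracy_parity_gap(y_true, y_pred, groups))  # group a: 0.75, group b: 0.25 -> gap 0.5
```

Analogous gaps can be computed for other fairness criteria (e.g. true-positive rates for equalised odds); the same subgroup-masking pattern applies.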
The majority of ML methods that aim to mitigate bias focus on balancing data distributions, or learn to adapt to the imbalances by adjusting the learning algorithm. The third and last part of the talk will introduce our work demonstrating how continual learning (CL) approaches are well suited for mitigating bias by balancing learning with respect to different attributes such as race and gender, without compromising recognition accuracy. At various stages, the talk will also outline recommendations for achieving greater fairness in affective and wellbeing computing, while emphasising the need for such models to be deployed and tested in real-world settings and applications, such as robotic wellbeing coaching via physical robots.
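One common CL mechanism for balancing learning across attribute groups is experience replay: data from each demographic "domain" arrives sequentially, and a small exemplar buffer of earlier domains is rehearsed during later updates. The sketch below is a minimal, assumed illustration of that mechanism with a NumPy logistic regression as the base learner; it is not the specific domain-incremental method used in the cited work, and the synthetic two-group data is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_logreg(w, X, y, lr=0.1, steps=50):
    """Full-batch gradient descent on the logistic loss (base learner)."""
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w = w - lr * X.T @ (p - y) / len(y)
    return w

# Two hypothetical demographic "domains" sharing one task (label = x0 > x1),
# but with a group-specific covariate shift in the features.
domains = []
for shift in (0.0, 2.0):
    X = rng.normal(shift, 1.0, size=(200, 2))
    y = (X[:, 0] > X[:, 1]).astype(float)
    domains.append((X, y))

w = np.zeros(2)
buffer_X, buffer_y = [], []
for X, y in domains:                     # domains arrive one after another
    if buffer_X:                         # rehearse exemplars from past domains
        X = np.vstack([X] + buffer_X)
        y = np.concatenate([y] + buffer_y)
    w = fit_logreg(w, X, y)
    buffer_X.append(X[:20])              # keep a small exemplar buffer so
    buffer_y.append(y[:20])              # later updates stay balanced
```

After sequential training, the final model can be evaluated per domain; a rehearsal buffer keeps the per-group accuracies from drifting apart as new groups arrive, which is the balancing effect the talk attributes to CL approaches.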
- Jiaee Cheong, Sinan Kalkan, and Hatice Gunes. 2021. The Hitchhiker's Guide to Bias and Fairness in Facial Affective Signal Processing: Overview and techniques. IEEE Signal Processing Magazine 38, 6 (2021), 39--49.
- Jiaee Cheong, Sinan Kalkan, and Hatice Gunes. 2022. Counterfactual Fairness for Facial Expression Recognition. In Computer Vision - ECCV 2022 Workshops, Tel Aviv, Israel, October 23--27, 2022, Proceedings, Part V. 245--261.
- Jiaee Cheong, Selim Kuzucu, Sinan Kalkan, and Hatice Gunes. 2023a. Towards Gender Fairness for Mental Health Prediction. In Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence, IJCAI 2023, August 19--25, 2023, Macao, SAR, China. 5932--5940.
- Jiaee Cheong, Micol Spitale, and Hatice Gunes. 2023b. "It's not Fair!" -- Fairness for a Small Dataset of Multi-modal Dyadic Mental Well-being Coaching. In 11th International Conference on Affective Computing and Intelligent Interaction, ACII 2023, MIT Media Lab, Cambridge, MA, USA, September 10--13, 2023. 1--8.
- Nikhil Churamani, Minja Axelsson, Atahan Caldir, and Hatice Gunes. 2022a. Continual Learning for Affective Robotics: A Proof of Concept for Wellbeing. In 10th International Conference on Affective Computing and Intelligent Interaction, ACII 2022 - Workshops and Demos, Nara, Japan, October 17--21, 2022. 1--8.
- Nikhil Churamani, Ozgur Kara, and Hatice Gunes. 2022b. Domain-Incremental Continual Learning for Mitigating Bias in Facial Expression and Action Unit Recognition. IEEE Transactions on Affective Computing (2022), 1--15.
- Tian Xu, Jennifer White, Sinan Kalkan, and Hatice Gunes. 2020. Investigating Bias and Fairness in Facial Expression Recognition. In Computer Vision - ECCV 2020 Workshops, Glasgow, UK, August 23--28, 2020, Proceedings, Part VI. 506--523.
Index Terms
- Fairness for Affective and Wellbeing Computing