Abstract:
In this letter, we focus on designing an effective method for lightweight and accurate facial action unit (AU) detection, which is essential for emotional communication in most human-robot interaction scenarios. AU detection is a delicate and challenging task because the subtle, fleeting appearance changes caused by AUs are very difficult to capture and represent. Therefore, existing approaches mainly deal with static facial states or frame-level temporal relationships. The dynamic process of facial muscle movement, the core characteristic of AUs, is nevertheless ignored and rarely exploited by prior studies. Based on this observation, we propose the Flow Supervised Module (FSM) to explicitly capture dynamic facial movement in the form of flow and use the learned flow to provide supervision signals for the detection model during training, effectively and efficiently. Furthermore, the proposed FSM can be easily incorporated into various backbone networks and boosts their performance. Extensive experiments are conducted on two benchmark datasets, DISFA and BP4D, showing state-of-the-art performance with competitive detection speed.
Published in: IEEE Robotics and Automation Letters (Volume: 6, Issue: 4, October 2021)
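To make the abstract's idea of flow-based supervision concrete, the following is a minimal sketch (not the authors' code) of one way an auxiliary flow branch could inject motion supervision into an AU detection backbone during training. All names (FlowSupervisedAUNet, flow_head, the loss weight lam) and the choice of a precomputed optical-flow target are assumptions for illustration only; the paper's actual FSM design is not specified in the abstract.

```python
# Hypothetical sketch: AU detection backbone with an auxiliary flow-supervision
# branch. Assumes a per-frame RGB input and a precomputed 2-channel optical-flow
# target between consecutive frames as the supervision signal.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FlowSupervisedAUNet(nn.Module):
    def __init__(self, num_aus=12, feat_dim=64):
        super().__init__()
        # Lightweight stand-in backbone; the paper states FSM can be plugged
        # into various backbones, so any feature extractor could sit here.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(feat_dim, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Auxiliary head predicting a dense 2-channel flow field from features;
        # used only to provide motion supervision at training time.
        self.flow_head = nn.Conv2d(feat_dim, 2, kernel_size=3, padding=1)
        # Multi-label AU classification head.
        self.au_head = nn.Linear(feat_dim, num_aus)

    def forward(self, frame):
        feat = self.backbone(frame)                      # (B, C, H', W')
        au_logits = self.au_head(feat.mean(dim=(2, 3)))  # global average pool
        flow_pred = self.flow_head(feat)                 # (B, 2, H', W')
        return au_logits, flow_pred


def training_loss(au_logits, au_labels, flow_pred, flow_target, lam=0.1):
    # Multi-label AU loss plus an auxiliary flow regression term.
    au_loss = F.binary_cross_entropy_with_logits(au_logits, au_labels)
    flow_target = F.interpolate(
        flow_target, size=flow_pred.shape[-2:],
        mode="bilinear", align_corners=False,
    )
    flow_loss = F.l1_loss(flow_pred, flow_target)
    return au_loss + lam * flow_loss
```

At inference time only the AU head would be used, so the auxiliary branch adds no detection-time cost, which is consistent with the abstract's emphasis on lightweight models and competitive detection speed.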