Authors:
Zhanyu Gao, Kai Chen and Dahai Yu
Affiliation:
TCL Corporate Research (HK) Co., Ltd, China
Keyword(s):
Transformer, Attention, Convolution.
Abstract:
Recently, facial landmark localization methods based on deep learning have achieved promising results, but they ignore the global context and the long-range relationships among landmarks. To address this issue, we propose a parallel multi-branch architecture combining convolutional blocks and transformer layers for facial landmark localization, named Intensive Attention in the Convolutional Vision Transformer Network (IACT), which has the advantages of capturing detailed features and gathering global dynamic attention weights. To further improve performance, the Intensive Attention mechanism is incorporated into the Convolution-Transformer Network and includes Multi-head Spatial Attention, Feature Attention, and Channel Attention. In addition, we present a novel loss function named Smooth Wing Loss, which removes the gradient discontinuity of the Adaptive Wing loss and thereby yields better convergence. Our IACT achieves state-of-the-art performance on the WFLW, 300W, and COFW datasets, with Normalized Mean Errors of 4.04, 2.82, and 3.12, respectively.
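The abstract does not give the formula for the proposed Smooth Wing Loss. As background on the wing-type losses it builds on, below is a minimal sketch of the original Wing loss (Feng et al., 2018); this is an illustrative reference implementation, not the paper's Smooth Wing Loss, and the hyperparameter values are the commonly used defaults, not values from this paper.

```python
import math

def wing_loss(x, w=10.0, eps=2.0):
    """Original Wing loss for a single landmark residual x.

    Log-shaped near zero to emphasize small localization errors,
    linear for large errors. w bounds the nonlinear region and eps
    controls its curvature (standard defaults shown).
    """
    x = abs(x)
    # Offset C makes the two pieces meet continuously at |x| == w.
    c = w - w * math.log(1.0 + w / eps)
    if x < w:
        return w * math.log(1.0 + x / eps)
    return x - c
```

Note that this piecewise definition is value-continuous at |x| = w but not gradient-continuous (the log branch has slope w/(w + eps) there, while the linear branch has slope 1); smoothing out such transition-point gradient gaps is exactly the kind of issue the proposed Smooth Wing Loss targets in the Adaptive Wing loss.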