Abstract:
Temporal Action Detection (TAD) is a challenging task in video understanding. Current methods mainly use global features for boundary matching or predefine all possible proposals, while ignoring long-range contextual information and local action-boundary features, which degrades detection accuracy. To fill this gap, we propose a Dilation Location Network (DL-Net) that generates more precise action boundaries by enhancing action-boundary features and aggregating long-range contextual information. Specifically, we design a boundary feature enhancement (BFE) block, which strengthens action-boundary features and fuses similar features across channels via pooling and channel squeezing. Meanwhile, for action localization, we design multiple dilated convolution structures to aggregate long-range contextual information around each time point/interval. Extensive experiments on ActivityNet-1.3 and THUMOS14 show that DL-Net effectively enhances action-boundary features and aggregates long-range contextual information.
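The abstract does not specify DL-Net's architecture, but the core idea of stacking dilated convolutions to aggregate long-range temporal context can be illustrated with a minimal NumPy sketch (the kernel size, dilation schedule, and single-channel setup below are illustrative assumptions, not the paper's configuration). Stacking 1-D convolutions with dilations 1, 2, 4 grows the temporal receptive field to 1 + (k-1)(1+2+4) = 15 frames for kernel size k = 3, with no extra parameters per layer:

```python
import numpy as np

def dilated_conv1d(x, w, dilation):
    """'Same'-padded dilated 1-D convolution over a single channel.
    Taps that fall outside the sequence read zeros (zero padding)."""
    k = len(w)
    pad = (k - 1) * dilation // 2
    xp = np.pad(x, (pad, pad))          # symmetric zero padding
    out = np.empty(len(x))
    for t in range(len(x)):
        # Sample the input at strides of `dilation` around position t.
        out[t] = sum(w[j] * xp[t + j * dilation] for j in range(k))
    return out

# Feed a unit impulse through three layers with dilations 1, 2, 4.
# An all-ones kernel makes every frame reachable from the impulse nonzero,
# so the count of nonzero outputs equals the receptive field size.
x = np.zeros(32)
x[16] = 1.0                              # unit impulse mid-sequence
w = np.ones(3)                           # kernel size k = 3
y = x
for d in (1, 2, 4):
    y = dilated_conv1d(y, w, d)
receptive_field = int(np.count_nonzero(y))
print(receptive_field)                   # 15 frames of temporal context
```

This exponential dilation schedule is the standard way to cover long temporal spans cheaply; DL-Net's actual dilation rates and channel structure would be found in the full paper.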
Published in: ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Date of Conference: 04-10 June 2023
Date Added to IEEE Xplore: 05 May 2023
ISBN Information: